Apache.NMS.AMQP setting prefetch size - amqp

I am using Apache.NMS.AMQP (v1.8.0) to connect to an AWS-managed ActiveMQ (v5.15.9) broker, but I am having problems setting the prefetch size for the connection/consumer/destination (I couldn't set a custom value on any of them).
While digging through the source code I found that the default prefetch value (DEFAULT_CREDITS) is set to 200.
To test this behavior I wrote a test that enqueues 220 messages on a single queue, creates two consumers, and then consumes the messages. The result was, as expected, that the first consumer dequeued 200 messages and the second dequeued 20.
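For reference, here is a rough JMS-flavoured sketch of that test (the original is in .NET via Apache.NMS, whose API mirrors JMS; the queue name and timeout are illustrative, and the exact split depends on the client's credit handling):

import javax.jms.*;

// Enqueue 220 messages, then create two consumers before draining:
// with a prefetch (link credit) of 200, the first consumer should be
// dispatched 200 messages and the second only the remaining 20.
public class PrefetchTest {

    static int drain(MessageConsumer c) throws JMSException {
        int n = 0;
        while (c.receive(500) != null) n++;   // stop once the buffer is empty
        return n;
    }

    public static void run(ConnectionFactory factory) throws JMSException {
        Connection conn = factory.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TestQueue");

        MessageProducer producer = session.createProducer(queue);
        for (int i = 0; i < 220; i++) {
            producer.send(session.createTextMessage("msg-" + i));
        }

        MessageConsumer c1 = session.createConsumer(queue);
        MessageConsumer c2 = session.createConsumer(queue);
        System.out.println("consumer1=" + drain(c1));   // expected: 200
        System.out.println("consumer2=" + drain(c2));   // expected: 20
        conn.close();
    }
}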
After that I looked for a way to set the prefetch size on my consumer, without any success, since the LinkCredit property of the ConsumerInfo class is read-only.
Since my use case requires one prefetch size for the whole connection, that is what I tried next, following this documentation page, but with no success. These are the URLs I tried:
amqps://*my-broker-url*.amazonaws.com:5671?transport.prefetch=50
amqps://*my-broker-url*.amazonaws.com:5671?jms.prefetchPolicy.all=50
amqps://*my-broker-url*.amazonaws.com:5671?jms.prefetchPolicy.queuePrefetch=50
After trying everything stated above, I tried setting the prefetch for my queue destinations by appending ?consumer.prefetchSize=50 to the queue name, resulting in something like this:
queue://TestQueue?consumer.prefetchSize=50
All of the above attempts resulted in an effective prefetch size of 200 (determined through the test described above).
Is there any way to set a custom prefetch size per connection when connecting to the broker over AMQP? Is there any other way to configure the broker than through the query parameters listed on that documentation page?

From a quick read of the code, there isn't currently any means of setting the consumer link credit in the NMS.AMQP client implementation. This seems to be something that would need to be added, as the client currently just supplies a default value to the AmqpNetLite receiver link for auto refill.
Their issue reporter is here.

Related

Can I configure max tries for Spring Cloud Stream with RabbitMQ using DLQ

I'm working with Spring Cloud Stream and Rabbit, and I used the config defined here to set up a dead-letter queue (DLQ) and it works very nicely.
What I'd like to do is set a maximum number of times a message goes to the DLQ before being discarded. Is it possible to set this via config? If so, how? If not, what should I do to achieve this behaviour?
I'm looking for a code sample for the best answer, preferably in Kotlin (if relevant)
That depends on whether you're using quorum queues or not. I don't believe there's a default config for that without quorum queues. However, you should be able to store the redelivery count in a custom header or in the message itself.
Then, if you set a MAX_REDELIVERY_COUNT constant in your application, you can check whether the message has exceeded the maximum number of redeliveries.
If you're not using quorum queues, I'd take a look at this answer:
How do I set a number of retry attempts in RabbitMQ?
That answer offers quite a few good options.
However, when using quorum queues, you can set the delivery-limit option. More info on that can be found here: https://www.rabbitmq.com/quorum-queues.html#feature-matrix.
Edit 1: using custom headers
In order to publish a message with custom headers:
import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.AMQP;

// Publish a message with two custom headers attached.
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("latitude", 51.5252949);
headers.put("longitude", -0.0905493);
channel.basicPublish(exchangeName, routingKey,
        new AMQP.BasicProperties.Builder()
                .headers(headers)
                .build(),
        messageBodyBytes);
As found on https://www.rabbitmq.com/api-guide.html#publishing.
The problem is that the headers can't simply be updated, but you can work around that. Let's say you want a maximum of 5 retries per message. If the message can't be processed, send it to a DLX. If the message hasn't exceeded the maximum retries, read the original headers of the message, update the custom retry-count header, and resend it to the original queue.
If the message arrives at the DLX and does exceed the maximum retry count, send the message as-is to the DLX with a different routing key, one bound to a queue for the "definitive" dead messages.
In a simplified diagram, that gives you a retry loop back through the original queue, plus a final dead-message queue hanging off the DLX. This is just an idea; I don't know for certain it will work, but it's the best I can think of in your situation.
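A minimal sketch of that check with the plain RabbitMQ Java client (the x-retries header name, the exchange/queue names, and the helper wiring are assumptions for illustration):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

public class RetryHandler {

    static final int MAX_REDELIVERY_COUNT = 5;

    // Called when processing fails: resend with an incremented retry counter,
    // or route the message to the "definitive" dead-message queue.
    static void handleFailure(Channel channel, AMQP.BasicProperties props,
                              byte[] body) throws IOException {
        Map<String, Object> headers = props.getHeaders() == null
                ? new HashMap<>()
                : new HashMap<>(props.getHeaders());
        Number retries = (Number) headers.getOrDefault("x-retries", 0);

        if (retries.intValue() < MAX_REDELIVERY_COUNT) {
            headers.put("x-retries", retries.intValue() + 1);   // bump the counter
            channel.basicPublish("", "original-queue",          // back to the source queue
                    props.builder().headers(headers).build(), body);
        } else {
            channel.basicPublish("my-dlx", "dead-for-good",     // final resting place
                    props, body);
        }
    }
}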
Edit 2: using the autoBindDlq
It seems the Spring Cloud Stream binder for RabbitMQ has an option for this. The docs at https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit say the following:
By using the optional autoBindDlq option, you can configure the binder to create and configure dead-letter queues (DLQs) (and a dead-letter exchange DLX, as well as routing infrastructure). By default, the dead letter queue has the name of the destination, appended with .dlq. If retry is enabled (maxAttempts > 1), failed messages are delivered to the DLQ after retries are exhausted. If retry is disabled (maxAttempts = 1), you should set requeueRejected to false (the default) so that failed messages are routed to the DLQ, instead of being re-queued. In addition, republishToDlq causes the binder to publish a failed message to the DLQ (instead of rejecting it). This feature lets additional information (such as the stack trace in the x-exception-stacktrace header) be added to the message in headers. See the frameMaxHeadroom property for information about truncated stack traces. This option does not need retry enabled. You can republish a failed message after just one attempt. Starting with version 1.2, you can configure the delivery mode of republished messages. See the republishDeliveryMode property.
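For illustration, a minimal sketch of the corresponding configuration in application.yml (the binding name input, the destination, and the group are assumptions; the property names come from the binder docs quoted above):

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: myDestination
          group: myGroup
          consumer:
            maxAttempts: 3          # retries before the message goes to the DLQ
      rabbit:
        bindings:
          input:
            consumer:
              autoBindDlq: true     # create/bind myDestination.myGroup.dlq
              republishToDlq: true  # republish failures with extra headers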

503: Max Client Queue and Topic Endpoint Flow Exceeded

Sometimes I am getting the following error:
503: Max Client Queue and Topic Endpoint Flow Exceeded
What do I need to configure to prevent this issue?
The number of "flows" is, roughly speaking, the number of endpoints to which you are subscribed. There are two types: ingress (for messages from your application into Solace) and egress (for messages from Solace into your application). You violated one of those limits. You can tell which by looking at the stack trace.
By default the limit on flows is 100. Before you increase this limit, ask yourself: are you really supposed to be subscribed to more than 100 queues/topics? If not, you may have a leak. Just as you wouldn't fix a memory leak by increasing memory, you shouldn't fix this leak by increasing the max flow. Are you forgetting to close your subscriptions? Are you using temporary queues? Despite their name, temporary queues last for the life of the client session unless you close them.
But if you really are supposed to be subscribed to that many endpoints, you may increase the max ingress and/or max egress. This can be done in SolAdmin by editing the Client Profile and selecting the Advanced Properties tab, or in solacectl by setting max-ingress or max-egress under configure/client-profile/message-spool (as explained here). (There is also a per-message-spool maximum, but you are unlikely to have violated that.)
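For illustration, that CLI path looks roughly like this (the profile name and the limits are assumptions, and exact command names can vary by SolOS version, so treat this as a sketch):

solace> enable
solace# configure
solace(configure)# client-profile default message-vpn default
solace(configure/client-profile)# message-spool
solace(configure/client-profile/message-spool)# max-ingress-flows 500
solace(configure/client-profile/message-spool)# max-egress-flows 500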
It looks like the "Max Egress Flows" setting in your client-profile has been exceeded. One egress flow will be used up for each endpoint that your application is binding to.
The "Max Egress Flows" setting can be located under the "Advanced Properties" tab, when you edit the client-profile.
We hit the same issue during a load test. With a mere few hundred messages we started getting the 503 error. We identified the problem in our producer's topic creation: we were building a new topic destination object for every message. Once we added caching for the topic destination objects, the issue was resolved.
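A minimal sketch of that fix in JMS terms (class and method names are illustrative; the poster did not share code): cache Destination objects per topic name instead of creating one per send.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

// Reuse one Destination per topic name rather than calling
// createTopic() for every message sent.
public class DestinationCache {

    private final Map<String, Destination> cache = new ConcurrentHashMap<>();

    public Destination topicFor(Session session, String name) throws JMSException {
        Destination d = cache.get(name);
        if (d == null) {
            d = session.createTopic(name);  // created at most once per name
            cache.put(name, d);
        }
        return d;
    }
}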

Spring JMS Websphere MQ open input count issue

I am using Spring 3.2.8 with JDK 6 and WebSphere MQ 7.5.0.5. In my application I make JMS calls using jmsTemplate via a thread pool. First I faced a condition where the "Current queue depth" count increased as I made JMS calls. I tracked all the objects I initiate via the thread pool and now interrupt or cancel all threads/future objects, so the "Current queue depth" count is under control.
Now the problem is that the "Open input count" value increases to nearly the number of requests I send. When I stop my server this count drops to 0.
Throughout all of this I am able to send requests and get responses up to a count of about 80, with a thread-pool size of 30. After the request count reaches about 80, I keep getting future-object rejections and stop receiving responses; in fact, the remaining calls receive null responses.
Please suggest.
I am using a queue in my application with a filter on correlation ID. I read more about this and found that calling jmsTemplate.receiveSelected(queue, filter) has a serious impact on performance. Once I removed this filter, the thread contention issue was resolved. But I still need the filtering.
For now I will apply the filter in a different way, accepting some application-level limitations: instead of receiveSelected I am using jmsTemplate.receive.
Update on 14-Sep
All - I found a solution and would like to post it here.
One of my colleagues helped rectify this issue, which was a great help. What we observed while debugging is that if cacheConsumer is true, then, based on the combination of
queue + message-selector + session
consumers are cached by Spring, and even calling the close() method does not do anything; it is basically an empty method, which left threads hanging/stuck.
After setting cacheConsumer to false, I reverted my code back to the original, i.e. jmsTemplate.receiveSelected(destination, messageSelector). Now when I hit a count of 100 requests, the thread count only increases by 5 to 10 across multiple iterations of the test.
So this property needs to be used carefully.
Hope this helps!
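For reference, a minimal sketch of turning consumer caching off with Spring's CachingConnectionFactory (the vendor ConnectionFactory and the timeout value are placeholders; setCacheConsumers(false) is the key call):

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

// Wrap the vendor ConnectionFactory, but do NOT cache consumers, so each
// receiveSelected(..) gets a fresh consumer and close() really closes it.
ConnectionFactory vendorCf = /* your WebSphere MQ ConnectionFactory */ null;
CachingConnectionFactory ccf = new CachingConnectionFactory(vendorCf);
ccf.setCacheConsumers(false);   // consumers are otherwise cached per destination+selector+session
ccf.setSessionCacheSize(10);

JmsTemplate jmsTemplate = new JmsTemplate(ccf);
jmsTemplate.setReceiveTimeout(5000);   // avoid blocking forever on receive/receiveSelected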
First I faced a condition where the "Current queue depth" count increased as I made JMS calls. I tracked all the objects I initiate via the thread pool and now interrupt or cancel all threads/future objects.
I have no idea what you are talking about but you should NOT be using/monitoring the 'current queue depth' value from your application. Bad, bad design. Only MQ monitoring tools should be using it.
Now the problem is that the "Open input count" value increases to nearly the number of requests I send. When I stop my server this count drops to 0.
Bad programming. You are 'opening a queue then putting a message' over and over and over again. How about you put some code in to CLOSE the queue or, better yet, REUSE the open queue!
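A minimal sketch of that reuse pattern with the IBM MQ classes for Java (queue manager and queue names are assumptions; the point is one open, many puts, one close):

import java.io.IOException;
import java.util.List;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class QueueReuse {

    // Open the queue once, reuse it for every put, close it once.
    static void sendAll(List<byte[]> payloads) throws MQException, IOException {
        MQQueueManager qmgr = new MQQueueManager("QM1");       // assumed qmgr name
        MQQueue queue = qmgr.accessQueue("REQUEST.QUEUE",      // assumed queue name
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);
        try {
            for (byte[] payload : payloads) {
                MQMessage msg = new MQMessage();
                msg.write(payload);
                queue.put(msg, new MQPutMessageOptions());     // same open handle each time
            }
        } finally {
            queue.close();                                     // one close at the end
            qmgr.disconnect();
        }
    }
}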

How to determine the value of `MaxMsgLength` of queue

I am trying to write a simple string message to a queue. The MaxMsgLength property of the queue is set to 4 KB. The message has 2700 characters, and when I try to put it on the queue I get a 2030 (07EE) (RC2030): MQRC_MSG_TOO_BIG_FOR_Q exception. I am not doing any special kind of encoding, so whatever is the default for Windows should apply.
I want to know how to determine the value I should give the MaxMsgLength property, and how to calculate it.
Please remember that the MaxMsgLength specified in the queue definition includes not just the payload, but also the message header and any properties that you set. If you check the Infocenter MQ_* (String Lengths) page and look for MQ_MSG_HEADER_LENGTH, you will see that the MQMD alone is 4000 bytes. So if you set the MaxMsgLength of the queue to 4k, the largest payload you can have is 96 bytes. If the queue in question is a transmission queue, you need the queue size plus the size of the MQXQH transmission queue header.
To specifically answer the question in the title: you can find the MaxMsgLength in two ways. Visually, by displaying the queue attributes; or programmatically, by adding "Inquire" to the open options when opening the queue and using the MQInq API call. Then add up the MQMD, any properties that you add (including the XML structures that contain them but which are not returned by the API calls that manipulate them), plus any headers such as the RFH2 (if the queues are set to use that instead of native properties), MQXQH, MQDLH, etc.
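A minimal sketch of the programmatic route with the IBM MQ classes for Java (the queue manager and queue names are assumptions; getMaximumMessageLength() performs the inquire under the covers):

import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class InquireMaxMsgLength {

    public static void main(String[] args) throws MQException {
        // Open the queue with MQOO_INQUIRE and ask for MaxMsgLength.
        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue queue = qmgr.accessQueue("TEST.QUEUE", CMQC.MQOO_INQUIRE);
        try {
            int maxMsgLength = queue.getMaximumMessageLength();
            System.out.println("MaxMsgLength = " + maxMsgLength + " bytes");
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }
}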
I'm not sure what language you are using in your application. Assuming it is C, check the BufferLength parameter value you specified on the MQPUT call.
This IBM MQ InfoCenter link explains the cases where you can run into the 2030 error, and possible remedies.

ActiveMQ: Is MessageConsumer's selector process on the broker or client side?

Could someone please confirm whether I'm right or wrong about this? It seems to me that the "selector" operation is performed within the MessageConsumer implementation, i.e. ALL messages are still dispatched from the message broker to the MessageConsumer, and the "selector" operation is then applied to those messages. The problem occurs when we have a bunch of messages that we are not interested in (i.e. that don't match our selector): those messages will eventually fill up the MessageConsumer's internal queue due to the prefetch or cache limit. As a result, we will not be able to receive any new messages, particularly the ones we're interested in via the selector.
So, is there a way to configure ActiveMQ to perform the selector operation on the broker side? Should I start looking at "interceptors" and create my own BrokerPlugin? Any advice on how to work around this issue?
I really appreciate any answer.
Thanks,
Soonthorn A.
Selectors are actually applied at the broker, not on the client side. If your selector is sparse and the destination sees a lot of traffic, it is likely that the broker has not paged in messages that match the selector, and your consumer won't see any matches until more messages are consumed from the destination.
The issue lies in the destination policy in play on your broker. By default the broker will only page in 200 messages for a browser, to avoid using up all available memory and impacting overall performance. You can increase this number via your own destinationPolicy in activemq.xml; see the documentation page here.
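For illustration, a sketch of raising that limit in activemq.xml (maxPageSize and maxBrowsePageSize are standard policyEntry attributes; the values and the catch-all queue pattern are assumptions to adapt):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- page more messages into memory so sparse selectors can match -->
      <policyEntry queue=">" maxPageSize="2000" maxBrowsePageSize="2000"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>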
