I have a requirement while sending messages to a queue (TIBCO EMS): in every iteration the number of messages should be a random number between 1000 and 3000. How can I achieve this in JMeter?
Will setting the number of samples to aggregate to ${__Random(1000,3000,)} work, or is there an alternative solution?
Config Details
I want to send a random number of messages (between 1000 and 3000) to the queue in every iteration.
You are correct: whatever number you configure in Number of samples to aggregate is the number of messages that will be pushed to the queue.
For example, if I give the value 3, then 3 messages are posted to the queue.
Note: in this case, all 3 messages will be identical.
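If the Number of samples to aggregate field does not evaluate the __Random function directly in your JMeter version, a hedged alternative is to compute the value once per iteration in a JSR223 PreProcessor (Groovy accepts this Java-style snippet) and reference the resulting variable in that field; the variable name msgCount is an assumption:

// compute a random message count for this iteration: 1000..3000 inclusive
int count = 1000 + new java.util.Random().nextInt(2001);
vars.put("msgCount", String.valueOf(count));

The field would then reference ${msgCount} instead of the function call.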
Related
I'm creating a load test where I send three types of JSON messages through an ActiveMQ topic to a server. After sending the first message I get 3 responses; after sending the second, I get 2 responses, according to the business logic.
One iteration sequentially:
publish message1
consume 3 responses as a result of successful processing message1
publish message2
consume 2 responses as a result of successful processing message2
etcetera
I need to run 50 parallel iterations without mixing up messages from different iterations. How can I do that?
I tried a JMS selector, but it can only filter messages by headers, and I don't have any specific header on each response to match.
Can I filter messages by, for example, a UUID? And how could that be implemented? I tried to find the needed information on the Internet, but without results.
I would be very thankful for any advice and help!
Yes, messages can be filtered by either header (fixed set of JMS header names) or by property (custom key-value pair).
JMSCorrelationID may be a good bet here. You can publish all messages for a given producer (or iteration) with the same JMSCorrelationID and then check the consumer counts that way.
i.e. for producer1, set: JMSCorrelationID = 'producer-1'
for producer2, set: JMSCorrelationID = 'producer-2'
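For illustration, a minimal JMS sketch of this approach (inside a method that may throw JMSException; the broker URL, topic names, and the 'producer-1' value are assumptions, and it is assumed the responding application copies the incoming JMSCorrelationID onto its responses, which is the usual request/reply convention):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = cf.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

// producer side: stamp every message of this iteration with the same correlation ID
MessageProducer producer = session.createProducer(session.createTopic("requests"));
TextMessage request = session.createTextMessage("{\"type\":\"message1\"}");
request.setJMSCorrelationID("producer-1");
producer.send(request);

// consumer side: a selector so this consumer only sees responses for 'producer-1'
MessageConsumer consumer = session.createConsumer(
        session.createTopic("responses"), "JMSCorrelationID = 'producer-1'");
Message response = consumer.receive(5000); // wait up to 5 s for each expected response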
I would like to process multiple messages at a time, e.g. get 10 messages from the channel at once and write them to a log file in one go.
Given this scenario, can I write a service activator that receives messages in a predefined batch, i.e. 5 or 10 messages, and processes them together? If this is not possible, how can I achieve it using Spring Integration?
That is exactly what you can get with the Aggregator. You can collect several messages into a group using a simple expression like size() == 10. When the group is complete, the DefaultAggregatingMessageGroupProcessor emits a single message with the list of the payloads of the messages in the group. You can send that result to the service-activator to handle the whole batch at once.
UPDATE
Something like this:
.aggregate(aggregator -> aggregator
        .correlationStrategy(message -> 1)                // one static group key
        .releaseStrategy(group -> group.size() == 10)     // release once 10 messages arrive
        .outputProcessor(g -> new GenericMessage<Collection<Message<?>>>(g.getMessages()))
        .expireGroupsUponCompletion(true))                // clear the group so a new one can form
So, we correlate messages (group, or buffer, them) by the static key 1.
The group (or buffer) size is 10; when we reach it, we emit a single message containing all the messages from the group. After emitting the result, we remove the group from the store so that a new one can form for a fresh sequence of messages.
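For context, a hedged sketch of how such an aggregator could sit in a complete Java DSL flow inside a @Configuration class (the "batchInput" channel name and the logging handler are illustrative assumptions; without a custom outputProcessor the default group processor emits one message whose payload is the list of the 10 payloads):

import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Bean
public IntegrationFlow batchingFlow() {
    return IntegrationFlows.from("batchInput")
            .aggregate(a -> a
                    .correlationStrategy(m -> 1)           // one static group key
                    .releaseStrategy(g -> g.size() == 10)  // release every 10 messages
                    .expireGroupsUponCompletion(true))     // let the next group form
            .handle((payload, headers) -> {
                // payload here is the List of the 10 original payloads
                System.out.println("Batch of " + ((List<?>) payload).size() + " messages");
                return null;                               // terminal handler, no reply
            })
            .get();
}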
It depends on what is creating the messages in the first place; if it is a message-driven channel adapter, the concurrency in that adapter is the key.
For other message sources, you can use an ExecutorChannel as the input channel to the service activator, with an executor with a pool size of 10.
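A hedged sketch of that second option (bean, channel, and method names are assumptions); note this gives concurrent processing of up to 10 messages at a time rather than a single batch of 10:

import java.util.concurrent.Executors;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;

@Bean
public MessageChannel processingChannel() {
    // every message sent to this channel is handed to one of 10 worker threads
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}

@ServiceActivator(inputChannel = "processingChannel")
public void handle(Message<?> message) {
    // up to 10 invocations run in parallel
}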
Depending on what is sending messages, you need to be careful about losing messages in the event of a server failure.
It's difficult to provide a general answer without more information about your application.
I have 5 input queues and 5 message flows, one for each. After some processing, messages from all queues go to one transport queue.
Is it possible to set a priority across queues so that, for example, messages from input queue 1 are always processed and put in the transport queue first?
It sounds like you need to set the message priority.
You set the priority of a message (in the Priority field of the MQMD structure) when you put the message on a queue ... you can create messages having priorities between 0 (the lowest) and 9 (the highest).
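For illustration, a hedged sketch with the IBM MQ classes for Java (queue manager and queue names are assumptions; inside a message flow you would set the same MQMD Priority field on the outgoing message instead):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

// inside a method that may throw MQException / IOException
MQQueueManager queueManager = new MQQueueManager("QM1");
MQQueue queue = queueManager.accessQueue("TRANSPORT.QUEUE", CMQC.MQOO_OUTPUT);

MQMessage message = new MQMessage();
message.priority = 9;                          // MQMD Priority: 0 (lowest) .. 9 (highest)
message.writeString("payload");
queue.put(message, new MQPutMessageOptions());

Note that for the priority to affect delivery order, the target queue's message delivery sequence must be set to priority (MSGDLVSQ(PRIORITY)) rather than FIFO.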
Hi, use the Sequence node to do this task.
https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/bc28010_.htm
Is there any way to count how many times a job has been requeued (via reject or nack) without manually requeuing the job?
I need to retry a job 'n' times and then drop it after the 'n'th attempt.
P.S.: Currently I requeue a job manually (drop the old job and create a new job with the exact same content plus an extra Counter header, if the Counter is not there or its value is less than 'n').
There is a redelivered message property that is set to true when a message has been redelivered one or more times.
If you want to track the redelivery count or the number of redeliveries left (akin to the hop limit or TTL in the IP stack), you have to store that value in the message body or headers (literally: consume the message, modify it, and then publish the modified message back to the broker).
There is also a similar question with an answer that may help you: How do I set a number of retry attempts in RabbitMQ?
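A hedged sketch of that manual approach with the RabbitMQ Java client (the queue name "work", the header name "x-retries", the limit of 5, and the existing Channel named channel and process() method are assumptions):

import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.DeliverCallback;

final int maxRetries = 5;
DeliverCallback onMessage = (consumerTag, delivery) -> {
    Map<String, Object> headers = delivery.getProperties().getHeaders();
    int retries = (headers != null && headers.get("x-retries") != null)
            ? ((Number) headers.get("x-retries")).intValue() : 0;
    try {
        process(delivery.getBody());                                  // business logic (assumed)
    } catch (Exception e) {
        if (retries + 1 < maxRetries) {
            // re-publish a copy with an incremented counter header
            Map<String, Object> newHeaders = new HashMap<>();
            if (headers != null) newHeaders.putAll(headers);
            newHeaders.put("x-retries", retries + 1);
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .headers(newHeaders).build();                     // copy other properties as needed
            channel.basicPublish("", "work", props, delivery.getBody());
        }                                                             // otherwise give up and drop it
    }
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false); // always ack the original
};
channel.basicConsume("work", false, onMessage, consumerTag -> { });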
In the case that the message was actually dead-lettered, you can check the contents of the x-death message header.
This would for example be the case when you reject/nack with requeue = false and the queue has an associated dead letter exchange.
In that case, the content of this header is an array. Each element describes a failed delivery attempt and contains information such as the time the delivery was attempted, routing information, etc.
This works for RabbitMQ - I don't know if it is applicable to AMQP in general.
EDIT
Since I originally wrote this answer, the x-death header structure has been changed.
It is generally a very bad thing for a header to change format, but in this particular case the reason was that the message size would otherwise grow indefinitely if the message was continuously dead-lettered.
I have therefore removed the piece of code that used to be here to get the no of deaths for a message.
It is still possible to get the number of deaths from the new header format.
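For reference, a hedged sketch of reading that count from the current header format with the RabbitMQ Java client (each x-death entry is a map with a "count" field per queue/reason combination; the exact layout may vary by broker version):

import java.util.List;
import java.util.Map;

import com.rabbitmq.client.AMQP;

@SuppressWarnings("unchecked")
static long deathCount(AMQP.BasicProperties properties) {
    Map<String, Object> headers = properties.getHeaders();
    if (headers == null || headers.get("x-death") == null) {
        return 0;                                    // never dead-lettered
    }
    List<Map<String, Object>> xDeath = (List<Map<String, Object>>) headers.get("x-death");
    long total = 0;
    for (Map<String, Object> entry : xDeath) {       // one entry per queue/reason combination
        total += ((Number) entry.get("count")).longValue();
    }
    return total;
}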
Is there a way to set up a Spring Integration channel so that it only sends messages to the output channel once it has accumulated, say, 50 incoming messages? Looking at it from a polling perspective, I want the polling process to be based on the number of messages instead of a fixed time interval: somehow poll the previous channel, possibly multiple times, but only accept the messages once there are enough to process.
Use an <aggregator/> with release-strategy-expression="size == 50", correlation-strategy-expression="'foo'", and expire-groups-upon-completion="true". The expire-groups setting allows the next group ('foo') to form.
Follow the aggregator with a simple <splitter /> (no expressions, just in/out channels).
The aggregator will accumulate messages until 50 arrive and then release them as a collection, and the splitter will split the collection back to single messages.
If you want to release based on size or elapsed time (release a short group if x seconds elapse) then configure a MessageGroupStoreReaper.
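Putting those pieces together, a hedged sketch of the XML configuration (the channel ids are assumptions, and the int namespace prefix is assumed to be declared):

<int:channel id="input"/>
<int:channel id="aggregated"/>
<int:channel id="output"/>

<int:aggregator input-channel="input" output-channel="aggregated"
                correlation-strategy-expression="'foo'"
                release-strategy-expression="size == 50"
                expire-groups-upon-completion="true"/>

<!-- split the released collection of 50 back into single messages -->
<int:splitter input-channel="aggregated" output-channel="output"/>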