Control flow throughput - ibm-integration-bus

I have a flow that receives a message (a list) with 400 records. For each record it creates a message and saves it to an MQ queue (Q-Records-IN).
There's another flow that watches the queue (Q-Records-IN) and processes each message.
In this case, the first flow is very fast, so it creates the 400 messages quickly.
I need to control the throughput of the second flow, limiting it to processing at most 10 messages simultaneously.
How can I do this?
I really appreciate any help.
Thanks
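For what it's worth, inside IIB this kind of limit is usually governed by the flow's additional-instances setting; purely as a generic illustration of "at most 10 messages processed simultaneously" against an MQ queue, here is a minimal JMS sketch (the connection factory is assumed to be configured elsewhere, and the processing logic is a placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class BoundedConsumerSketch {
    private static final int MAX_CONCURRENCY = 10;

    public static void start(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        connection.start();
        ExecutorService workers = Executors.newFixedThreadPool(MAX_CONCURRENCY);
        for (int i = 0; i < MAX_CONCURRENCY; i++) {
            // Each worker owns its own session and consumer, so at most ten
            // records are ever being processed at the same time.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("Q-Records-IN"));
            workers.submit(() -> {
                try {
                    while (true) {
                        Message record = consumer.receive(); // blocks until a record arrives
                        processRecord(record);               // placeholder for the per-record logic
                    }
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    private static void processRecord(Message record) {
        // per-record processing goes here
    }
}

Ten long-lived consumers pulling from Q-Records-IN naturally throttle the second flow to ten records in flight, however quickly the first flow fills the queue.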

Related

How can we write a single message (not a batch) fast in Kafka?

I am new to Golang and Kafka, and I am using segmentio kafka-go to connect to a Kafka server from Golang. For now I want to push every user event to Kafka, so I want to push single messages (not batches), but since the write operation provided by this library takes the same time for a batch as for a single message, it is taking a lot of time. Is there any way to write a single message quickly, so that I can push a million events to Kafka in less time?
I have tested it with single messages and with batches, and it takes the same time (the minimum was 10ms).
I think your problem is just the WriterConfig.
For example, if your config looks like the example in the segmentio/kafka-go docs:
w := kafka.NewWriter(kafka.WriterConfig{
    Brokers:  []string{"localhost:9092"},
    Topic:    "topic-A",
    Balancer: &kafka.LeastBytes{},
})
You could try setting batch size and batch timeout:
w := kafka.NewWriter(kafka.WriterConfig{
    Brokers:      []string{"localhost:9092"},
    Topic:        "topic-A",
    Balancer:     &kafka.LeastBytes{},
    BatchSize:    1,
    BatchTimeout: 10 * time.Millisecond,
})
This happens because, by default, kafka-go waits 1 second for the batch to reach its maximum size, which defaults to 100 messages, as we can see in the code.
Hope it helps you.
Update: Be aware that sending the messages one by one slows the process down.
For example: sending 100 messages in a batch took 0.0107s on my computer; sending the same 100 messages one by one took 0.0244s.
I don't know much about Golang, but the Writer.WriteMessages function has a synchronous send option.
Writing fast (using a sync send) really depends on your network round-trip time, i.e. the time taken to put the message to Kafka plus the time taken to get the acknowledgement back from Kafka.
If you are using a sync send, your send will block until the acknowledgement is received.
So, to make it fast, one way is to reduce the acknowledgements. Setting acks to 1 (meaning the leader has written the message to its log, but it has not yet been replicated to the followers) is faster, but it can cause message loss if the leader goes down before the message is replicated.
For safety, you can instead set acks=all and set min.insync.replicas=2 on the topic. The lower the acknowledgement requirement, the faster your send() returns and the faster it can push the next message to Kafka.
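To make the acks trade-off concrete, here is roughly how it looks with the standard Java producer client (a sketch, not the kafka-go API discussed above; the broker address and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=1: leader-only acknowledgement - faster, but messages can be lost
        // if the leader fails before replication.
        // acks=all (plus min.insync.replicas=2 on the topic): safer, slower per send.
        props.put(ProducerConfig.ACKS_CONFIG, "1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("topic-A", "user-event");
            producer.send(record).get(); // synchronous: blocks until the acknowledgement arrives
        }
    }
}

With acks=1 the send completes as soon as the leader has appended the message; with acks=all it also waits for the in-sync replicas, which is what min.insync.replicas=2 enforces on the topic side.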

Spring JMS Websphere MQ open input count issue

I am using Spring 3.2.8 with JDK 6 and WebSphere MQ 7.5.0.5. In my application I make some JMS calls using jmsTemplate via a ThreadPool. First I hit a condition where the "Current queue depth" count kept increasing as I made JMS calls. I tracked all the objects I initiate via the ThreadPool and interrupt or cancel all threads/Future objects, so the "Current queue depth" count is now under control.
The problem now is that the "Open input count" value increases to nearly the number of requests I am sending. When I stop my server this count drops to 0.
In all of this I am able to send requests and get responses up to a count of about 80, with a ThreadPool size of 30. After the request count reaches roughly 80 I keep getting Future rejection errors and stop receiving responses; the remaining calls just get null responses.
Please suggest.
I am using a queue in my application and a filter on correlation ID has been applied. I read more on this and found that calling jmsTemplate.receiveSelected(queue, filter) has a serious impact on performance. Once I removed this filter, the thread contention issue was resolved, but filtering is still a problem for me.
For now I will apply the filter in a different way, with some limitations in the application: instead of receiveSelected I am using jmsTemplate.receive.
Update on 14-Sep
All - I found a solution and would like to post it here.
One of my colleagues helped rectify this issue, which was a great help. What we observed after debugging is that if cacheConsumers is true, then consumers are cached by Spring based on the combination of
queue + message-selector + session
and even calling the close() method does not do anything; it is basically an empty method, which causes threads to hang/get stuck.
After setting cacheConsumers to false, I reverted my code back to the original, i.e. jmsTemplate.receiveSelected(destination, messageSelector); now when I hit 100 requests, the thread count only increased by between 5 and 10 during multiple iterations of the test.
So - this property needs to be used carefully.
Hope this helps!
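For illustration, a minimal sketch of where this setting lives with Spring's CachingConnectionFactory (the session cache size and receive timeout are hypothetical values):

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class JmsConfigSketch {
    public static JmsTemplate buildTemplate(ConnectionFactory targetFactory) {
        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(targetFactory);
        cachingFactory.setSessionCacheSize(30);   // hypothetical: sized to the thread pool
        cachingFactory.setCacheConsumers(false);  // do not cache one consumer per selector value
        JmsTemplate jmsTemplate = new JmsTemplate(cachingFactory);
        jmsTemplate.setReceiveTimeout(5000);      // hypothetical timeout in milliseconds
        return jmsTemplate;
    }
}

With cacheConsumers disabled, each jmsTemplate.receiveSelected(destination, messageSelector) call creates its consumer and genuinely closes it afterwards, instead of keeping one cached consumer per distinct selector value.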
First I hit a condition where the "Current queue depth" count kept increasing as I made JMS calls. I tracked all the objects I initiate via the ThreadPool and interrupt or cancel all threads/Future objects.
I have no idea what you are talking about but you should NOT be using/monitoring the 'current queue depth' value from your application. Bad, bad design. Only MQ monitoring tools should be using it.
The problem now is that the "Open input count" value increases to nearly the number of requests I am sending. When I stop my server this count drops to 0.
Bad programming. You are 'opening a queue then putting a message' over and over and over again. How about you put some code to CLOSE the queue or better yet, REUSE the open queue!!!!!!!
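As a plain-JMS illustration of the open-once-and-reuse pattern (the queue name is hypothetical); on the receiving side this is also what keeps the "Open input count" from climbing:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ReuseOpenQueueSketch {
    // Opens the queue once, reuses it for every message, and closes it explicitly.
    // Every consumer that is never closed shows up as +1 on the queue's open input count.
    public static void drainReplies(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("Q.REPLY.IN");           // hypothetical queue name
            MessageConsumer consumer = session.createConsumer(queue);  // open once
            Message message;
            while ((message = consumer.receive(1000)) != null) {       // reuse for every message
                // handle the reply here
            }
            consumer.close(); // the open input count drops only when this runs
            session.close();
        } finally {
            connection.close();
        }
    }
}

The same idea applies on the sending side: create the MessageProducer once and reuse it for every put instead of opening the queue per message.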

read messages from JMS MQ or In-Memory Message store by count

I want to read messages from a JMS MQ or an in-memory message store based on a count.
For example, I want to start reading the messages only when the message count reaches 10; until then, I want the message processor to stay idle.
I want this to be done using WSO2 ESB.
Can someone please help me?
Thanks.
I'm not familiar with WSO2, but from an MQ perspective, the way to do this would be to trigger the application to run once there are 10 messages on the queue. There are trigger settings for this, specifically TRIGTYPE(DEPTH).
To expand on Morag's answer, I doubt that WSO2 has built-in triggers that would monitor the queue for depth before reading messages. I suspect it just listens on a queue and processes messages as they arrive. I also doubt that you can use MQ's triggering mechanism to conveniently execute the flow directly based on depth. So although triggering is a great answer, you need a bit of glue code to make it work.
Conveniently, there's a tutorial that provides almost all the information necessary to do this. Please see Mission:Messaging: Easing administration and debugging with circular queues for details. That article has the scripts necessary to make the Q program work with MQ triggering. You just need to make a couple changes:
Instead of sending a command to Q to delete messages, send a command to move them.
Ditch the math that calculates how many messages to delete and either move them in batches of 10, or else move all messages until the queue drains. In the latter case, make sure to tell Q to wait for any stragglers.
Here's what it looks like when completed: the incoming messages land on some queue other than the WSO2 input queue. That queue is triggered based on depth so that the Q program (SupportPac MA01) copies the messages to the real WSO2 input queue. After the messages are copied, the glue code resets the trigger. This continues until there are fewer than 10 messages on the queue, at which point the cycle idles.
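If triggering is not available, a cruder way to approximate "only start reading at a depth of 10" from plain JMS is to browse the queue first and only consume once enough messages are waiting. This is not the Q-program approach described above, just a rough polling sketch with hypothetical names:

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

public class DepthGatedReaderSketch {
    // Consumes a batch of messages only once at least `threshold` messages are waiting.
    public static void readWhenDeepEnough(ConnectionFactory factory, int threshold) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("Q.RECORDS.IN"); // hypothetical queue name

            // Count the waiting messages without consuming them.
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> waiting = browser.getEnumeration();
            int depth = 0;
            while (waiting.hasMoreElements()) {
                waiting.nextElement();
                depth++;
            }
            browser.close();
            if (depth < threshold) {
                return; // stay idle; poll again later
            }

            MessageConsumer consumer = session.createConsumer(queue);
            for (int i = 0; i < threshold; i++) {
                Message message = consumer.receive(1000);
                if (message == null) {
                    break; // someone else drained the queue in the meantime
                }
                // process the message here
            }
            consumer.close();
            session.close();
        } finally {
            connection.close();
        }
    }
}

Browsing counts the messages non-destructively, so nothing is actually consumed until the threshold is met.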
I got it working by pushing the messages to a database and fetching them once the required count is reached, as described in my own answer.

WMQ Message Logging Scenarios v 7.5

In the following scenario, I'm curious what happens with respect to what's in the active log files of the queue manager in question. Linear logging is being used.
What activity (if any) is experienced by the MQ active logs during a scenario where a queue with, say, 100 messages is being read with a JMS context attribute (looking for a specific message) that, for the sake of this argument, it will NEVER find? All messages are read off the queue, but none are committed; the messages therefore were never actually deleted from the queue. Does the queue manager, however, record such GET operations so as to recover these "in flight" conditions, should the queue manager crash while this is happening?
We recently experienced a situation where the dequeue rate from a specific queue was in the 4000-4500 msg/min range, while the queue depth was only about 2500. We surmise that more than one such process thread was trying to read a JMS message by context (sort of like with correlation ID, I suppose), but without any hope of ever actually finding the message it was looking for (due to a probable misconfiguration). During this time, the active logs filled up rapidly. Is it likely that such wanton dequeue rates as we saw were the culprit?
MQ writes log records for persistent messages during get and put. More details can be found here:
http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.dev.doc/q023070_.htm

Async Request-Response Algorithm with response time limit

I am writing a message handler for an ebXML message-passing application. The messages follow the Request-Response pattern. The process is straightforward: the Sender sends a message, the Receiver receives it and sends back a response. So far so good.
On receipt of a message, the Receiver has a set Time To Respond (TTR) to the message. This could be anywhere from seconds to hours/days.
My question is this: how should the Sender deal with the TTR? I need this to be an async process, as the TTR could be quite long (several days). How can I count down the timer without tying up system resources for long periods of time? There could be large volumes of messages.
My initial idea is to have a "Waiting" Collection, to which the message Id is added, along with its TTR expiry time. I would then poll the collection on a regular basis. When the timer expires, the message Id would be moved to an "Expired" Collection and the message transaction would be terminated.
When the Sender receives a response, it can check the "Waiting" collection for its matching sent message, and confirm the response was received in time. The message would then be removed from the collection for the next stage of processing.
Does this sound like a robust solution? I am sure this is a solved problem, but there is precious little information about this type of algorithm. I plan to implement it in C#, but the implementation language is kind of irrelevant at this stage, I think.
Thanks for your input
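As a rough sketch of the waiting-collection idea described above (the question targets C#, but the shape is the same; written here in Java, with a made-up sweep interval and method names):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TtrTrackerSketch {
    private final Map<String, Long> waiting = new ConcurrentHashMap<>();  // messageId -> TTR expiry (epoch millis)
    private final Set<String> expired = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    public TtrTrackerSketch(long sweepIntervalSeconds) {
        // One periodic sweep instead of one timer per message keeps resource usage flat.
        sweeper.scheduleAtFixedRate(this::sweep, sweepIntervalSeconds, sweepIntervalSeconds, TimeUnit.SECONDS);
    }

    public void messageSent(String messageId, long ttrMillis) {
        waiting.put(messageId, System.currentTimeMillis() + ttrMillis);
    }

    // Called when a response arrives; returns true if it arrived within its TTR.
    public boolean responseReceived(String messageId) {
        Long expiry = waiting.remove(messageId);
        return expiry != null && System.currentTimeMillis() <= expiry;
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        waiting.forEach((id, expiry) -> {
            if (expiry <= now && waiting.remove(id, expiry)) {
                expired.add(id); // terminate the message transaction for this id elsewhere
            }
        });
    }
}

A single periodic sweep over the map scales to large volumes of outstanding messages without tying up a thread or timer per message, which was the original concern.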
Depending on the number of clients, you can use persistent JMS queues, one queue per client ID. The message will stay in the queue until a client connects and retrieves it.
I'm not sure I understand the purpose of the TTR. Is it more of a client-side measure, meaning that if the response cannot be returned within a certain time then just don't bother sending it? Or is it to be used on the server to schedule the work: do what's required now and push the requests with later response times to be handled later?
It's a broad question...
