I have an MDB (BatchProcessor) that receives messages telling it to process a batch. Instead of working on one item in the batch at a time, the BatchProcessor creates a BatchItemProcessor per item and uses a Semaphore to cap how many items are processed concurrently. The BatchItemProcessors are stateless session beans injected by the GlassFish container.
We need to send a SOAP message with the item's status to another service whenever we finish processing a batch item. The BatchItemProcessor creates and sends a message to a JMS topic, and another MDB, StatusSender, listens for these messages and creates/sends the SOAP message to the other service.
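To make the setup concrete, here is a minimal sketch of the dispatch pattern (all class and method names here are hypothetical, not our actual code):

// Minimal sketch of the dispatch pattern described above (hypothetical names)
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Semaphore;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven
public class BatchProcessor implements MessageListener {

    /** Hypothetical local interface of the stateless session bean; its process() method is asynchronous. */
    public interface BatchItemProcessor {
        void process(String item, Semaphore permits);
    }

    private static final int MAX_CONCURRENT_ITEMS = 5;      // illustrative limit
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT_ITEMS);

    @EJB
    private BatchItemProcessor itemProcessor;

    @Override
    public void onMessage(Message batchMessage) {
        for (String item : extractItems(batchMessage)) {
            try {
                permits.acquire();                           // blocks once the concurrency cap is reached
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // The asynchronous bean releases the permit in a finally block when the item completes.
            itemProcessor.process(item, permits);
        }
    }

    private List<String> extractItems(Message batchMessage) {
        return Collections.emptyList();                      // placeholder: unmarshal the batch payload
    }
}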
The problem is that the SOAP messages aren't being sent until all the batch items have been dispatched by the BatchProcessor.
Looking at the logs, the BatchItemProcessors are created in an EJB thread pool (e.g. name from the logs: __ejb-thread-pool10) and the BatchProcessor and StatusSender are created in what I am assuming is the GlassFish MDB thread pool (e.g. name from the logs: p: thread-pool-1; w: 39).
I am assuming (because I can't find documentation to confirm or deny it) that p: thread-pool-1; identifies the pool and w: 39 the actual thread. If that is true, then the BatchProcessor uses a completely different thread from the StatusSender, since the thread identifiers are different (one for BatchProcessor and several for StatusSender).
Which means I'm really confused as to why the BatchProcessor's onMessage must complete before the StatusSender's onMessage calls can be executed.
Looking at my Glassfish EJB Container/MDB Settings configuration (both default-config and server-config) I have the following (which I believe are defaults):
Initial and Min Pool Size = 0
Max Pool Size = 32
Pool Resize Quantity = 8
Idle Timeout = 600
If someone can help me with this issue it would be very much appreciated. Also, any recommendations for a good resource (book, website, etc.) would be appreciated.
UPDATE
My issue was related to some code logic that was delaying the execution of the StatusSender's onMessage. However, I would still appreciate any recommendation for a good reference on GlassFish and threads/pools.
Related
I have a limitation in that I can only listen to one partition of the Kafka topic, and I still need to improve the throughput of message processing; each message must be sent to MQ at the end of that processing.
So once I receive a message in the Kafka listener, I use @Async for the further processing (storing the message in the DB and later posting it to MQ).
The issue I see is that the session cache is not working: once I read from the Kafka listener, a new thread is opened to do the further work, and once it reaches the JMS send method, after a certain point I end up with the MQ error "max connections reach channel capacity", I think MQRC 2537.
I am not sure what the issue would be. I am using com.ibm.mq/mq-jms-spring-boot-starter as the dependency.
I have set the session cache to 20 and the async pool to 30. Does this mean it will still try to create 10 more JMS connections if the tasks from all 30 threads arrive at almost the same time?
My understanding of the session cache is that at most that many sessions/connections will be created, and that 10 of the 30 threads would have to wait for a JMS session to become available.
Please assist; we are using Spring Boot.
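For context, a rough sketch of the flow described above (class names, topic and queue names are placeholders; a 30-thread executor is assumed to back @Async):

// Rough sketch of the described flow (all names are placeholders)
import org.springframework.jms.core.JmsTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Component
public class InboundListener {

    private final MessageWorker worker;

    public InboundListener(MessageWorker worker) {
        this.worker = worker;
    }

    // Only one partition, so a single listener thread reads from Kafka
    @KafkaListener(topics = "inbound.topic")
    public void onMessage(String payload) {
        worker.handle(payload);   // hands off immediately to the @Async pool (sized 30 here)
    }
}

@Component
class MessageWorker {

    private final JmsTemplate jmsTemplate;

    MessageWorker(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Runs on the async executor; each concurrent call needs its own JMS session,
    // which is where the channel capacity gets exceeded.
    @Async
    public void handle(String payload) {
        // 1. store the message in the DB (omitted)
        // 2. forward it to MQ
        jmsTemplate.convertAndSend("DEV.OUT.QUEUE", payload);
    }
}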
I think the problem here is that you end up with too many connections being opened. All the pooling is done in the background for you by spring-jms, and it appears that your pool size exceeds the number of connections that the channel is configured for.
You will need to apply a throttle, by reducing the connection pool. There are caching properties that you can set for mq-jms-spring-boot-starter. See https://github.com/ibm-messaging/mq-jms-spring
e.g.
ibm.mq.pool.enabled=true
ibm.mq.pool.maxConnections=5
ibm.mq.pool.blockIfFull=true
ibm.mq.pool.blockIfFullTimeout=60
You will need to determine sensible values for your environment.
I am working on BPEL 12c. I have a producer BPEL process which loops 10 times and publishes a message to a JMS queue on each iteration. I also have a BPEL process which consumes from that JMS queue. Everything works perfectly. My only concern is: if I go to the EM Console and try to open the instances of the consumer, I do not see 10 separate instances; instead I see the instance of the producer, and when I open it, it shows all 10 consumers under it. This behaviour would be fine if the parent process invoked a child process, but here they are two completely separate processes, so why does it group them like this? Imagine if there were 100 such messages; the flow would not open at all and could cause memory issues. Please let me know if there is a way to modify this behaviour and whether I will be able to see all 10 instances separately.
It looks like expected behaviour from Oracle SOA: if it knows of an initiating composite, it shows the whole trail (i.e. the production and the consumption). It doesn't matter whether you build the consumer in a separate project or in the same one; it still links them.
A little bit of background: I need to improve the performance of one of our batch frameworks. There, batch inputs are sent to a JMS queue, and at the queue endpoint we have an MDB which consumes the messages. What I suspect is that when there is a large number of messages, no MDB instance is available to consume them because all of the instances are held up processing the previous messages. To improve this, I am thinking of implementing a thread pool in the MDB business logic so that once the MDB has received a message and handed it to a thread, it is free to consume another message.
Before implementing this, I want to monitor my JMS queues to check whether messages really are waiting in the queues or not. So I need to know if this monitoring can be done via the WAS admin console or some JMX application. My main purpose is to check the waiting time of each JMS message in the queue.
First, you can set the number of processes (MDB instances) that consume the queue in parallel. The default is 10 (per member of the cluster).
With the console: Resources -> JMS -> Activation specifications, set "Maximum concurrent MDB invocations per endpoint", which is defined as "The maximum number of endpoints to which messages are delivered concurrently."
As for monitoring the Q and generating some load, you can have a look at JMSToolBox on sourceforge
In the "Destination information" dialog in JMSToolBox, you will also be able to see the number of concurrent consumers on the Q
Also, if you want to measure the time a message spends in the queue, just compute the difference between the current time and the JMSTimestamp standard JMS property of the message when it is processed by the MDB in the onMessage() method.
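For example, a minimal sketch of that measurement (the MDB class name is hypothetical):

// Minimal sketch: log how long each message waited in the queue before onMessage() ran
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

public class TimingMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            long waitedMillis = System.currentTimeMillis() - message.getJMSTimestamp();
            System.out.println("Message waited " + waitedMillis + " ms in the queue");
        } catch (JMSException e) {
            // getJMSTimestamp() can throw JMSException; ignore for the sketch
        }
        // ... normal processing ...
    }
}

Note that JMSTimestamp is set on the sending side, so the clocks of the sending and receiving hosts need to be reasonably in sync for the measurement to be meaningful.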
We are using RabbitMQ 3.0.1 on CentOS 6, and as a client Spring's spring-rabbit, version 1.1.2.RELEASE. (I know these aren't the latest versions; see later.)
We send messages to RabbitMQ via this client. These messages are initiated via an external REST call: someone else calls our web service, which updates the database and sends the AMQP message. I would like to be informed if RabbitMQ blocks the client, for instance when the disk_free_limit threshold is reached.
Importantly, I would like to be informed on the same thread as the one processing the web request, so that I can roll back the transaction.
Our web service also updates a database (within a transaction, obviously). Normally this works fine. However, under certain circumstances RabbitMQ can block our web server, the most obvious being when the disk_free_limit is reached. This blocks the web server thread indefinitely. The external caller of the web service will obviously time out after a sensible period, but the thread in our web service doesn't: it stays around and keeps its resources, and importantly the transaction, open.
The web server thread is blocking because the send is transactional. It isn't the initial message that blocks; it is the commit. I assume RabbitMQ is blocking because it can't persist the message or something like that. The thread blocks until RabbitMQ sends the commit-ok message back. The relevant bit of code is deep within the RabbitMQ client implementation, in com.rabbitmq.client.impl.ChannelN:
public Tx.CommitOk txCommit() throws IOException
{
    return (Tx.CommitOk) exnWrappingRpc(new Tx.Commit()).getMethod();
}
and this eventually calls the following method from com.rabbitmq.client.impl.AMQChannel
public T getReply() throws ShutdownSignalException
{
    return _blocker.uninterruptibleGetValue();
}
The preferable solution for this would be some sort of timeout on the txCommit - then I could throw an exception and fail the web service with a 500 or whatever. I can't find any way of doing this.
What I have found is:
addBlockedListener - this registers a listener for the notification RabbitMQ sends when it blocks a connection. This is good, but the notification is handled by another thread, so I can't fail the web service from it. Using it I could at least log the fact that RabbitMQ is blocked, through syslog or whatever (see the sketch after this list). However, it isn't available in the version we run; we would have to upgrade to the latest client, and we would prefer not to because of the testing that would imply.
setConnectionTimeout(int) - this sets the connection timeout for the initial connection to rabbitmq. This doesn't apply in my case, because rabbitmq is up and running and accepts the connection.
AmqpTemplate.setReplyTimeout() - as shown above, this reply timeout does not apply to the commit.
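For reference, a sketch of what the addBlockedListener option looks like on a newer Java client than 3.0.1 (the host name is made up, and the callback runs on a connection thread, so it can only log or alert; it cannot fail the in-flight web request):

// Sketch: logging broker-blocked notifications with a newer amqp-client than 3.0.1
import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BlockedConnectionLogger {

    public static Connection connect() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com");           // hypothetical host
        Connection connection = factory.newConnection();

        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // e.g. the reason mentions the resource alarm when disk_free_limit is breached
                System.err.println("RabbitMQ blocked this connection: " + reason);
            }

            @Override
            public void handleUnblocked() {
                System.err.println("RabbitMQ unblocked this connection");
            }
        });
        return connection;
    }
}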
I fully understand that this situation (disk_free_limit threshold is breached) is a situation which should not occur in a production system. However, I would like to be able to cope nicely with this situation so that my application behaves nicely when one of its components (rabbitmq) has a problem.
So, what other options do I have? Short of rewriting portions of the Spring AMQP client or removing the transactionality, is there any way of doing what I want?
We are using IBM MQ and we are facing some serious problems controlling its asynchronous delivery to the recipient. We have some Java listeners configured; the problem is that we need to control the messages coming towards the listeners, because the messages arriving at the server number in the millions and the server machine doesn't have the capacity to process that many threads at a time. So is there any kind of throttling on the IBM MQ side, where we can configure a prefetch limit like Apache ActiveMQ does?
Or is there any other way to achieve this?
Currently we close the connection to IBM MQ when some limit X is reached on the listener, but that doesn't seem to be an efficient way.
Please help us out in solving this issue.
Generally with message queueing technologies like MQ the point of the queue is that the sender is decoupled from the receiver. If you're having trouble with message volumes then the answer is to let them queue up on the receiver queue and process them as best you can, not to throttle the sender.
The obvious answer is to limit the maximum number of threads that your listeners are allowed to take up. I'm assuming you're using some sort of MQ threadpool? What platform are you using that provides unlimited listener threads?
From your description, it almost sounds like you have some process running that, as soon as it detects a message in the queue, reads the message, starts up a new thread, and goes back to look at the queue again. This is the WRONG approach.
You should have a defined number of processing threads running (start with one and scale up as required, within the limits of your server) which read from the queue themselves. They would each open the queue in shared mode and either get-with-wait or do an immediate get with a sleep if you get MQRC 2033 (no messages in queue), roughly as sketched below.
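To illustrate, a hedged JMS-level sketch of that pattern, using receive-with-timeout in place of the base MQI get-with-wait (the connection factory setup and queue name are assumed; receive() returning null is the JMS equivalent of MQRC 2033):

// Sketch: a fixed number of these workers, each doing its own receive-with-wait (JMS 2.0 style)
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class QueueWorker implements Runnable {

    private final ConnectionFactory connectionFactory;  // e.g. an MQ connection factory, configured elsewhere
    private volatile boolean running = true;

    public QueueWorker(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    @Override
    public void run() {
        try (Connection connection = connectionFactory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("DEV.IN.QUEUE");   // hypothetical queue name
            MessageConsumer consumer = session.createConsumer(queue);

            while (running) {
                Message message = consumer.receive(5000);        // get-with-wait, 5 seconds
                if (message == null) {
                    continue;                                    // nothing available, wait again
                }
                process(message);                                // do the real work on this thread
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void process(Message message) {
        // application-specific processing (placeholder)
    }

    public void stop() {
        running = false;
    }
}

You would start a fixed number of these workers, for example from Executors.newFixedThreadPool(n), and tune n to what your server can actually handle.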
Hope that helps.
If you are running in an application server environment, then the maxPoolDepth property on the activation spec will define the maximum ServerSessionPool size for the MDB; decreasing this will throttle the number of messages being delivered concurrently.
Of course, if your MDB (or javax.jms.MessageListener in the JSE environment) does nothing but hand the message to something else (or, worse, just spawn an unmanaged Thread and start it), onMessage will spin rapidly and you can still encounter problems. So in that case you need to limit other resources too, e.g. via thread pool configuration.
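For example, one hedged way to do that hand-off without letting onMessage run ahead of the workers is a bounded pool with a caller-runs rejection policy (the sizes are illustrative, and in a full Java EE server a container-managed executor is preferable to raw threads):

// Illustrative only: when the queue is full, CallerRunsPolicy makes the onMessage thread
// execute the task itself, so message delivery naturally slows down instead of piling up work.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BoundedWorkerPool {

    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                4,                                            // core threads (illustrative)
                4,                                            // max threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(100),        // bounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy());   // push back on the caller when full
    }
}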
Closing the connection to the QM is never an efficient way, as the MQCONN/MQDISC cycle is expensive.