What are the best practices regarding sessions in an application that is designed to fetch messages from an MQ server every 5 seconds?
Should I keep one session open for the whole time (could be weeks or longer), or better open a session, fetch the messages, and then close the session again?
I am using the IBM XMS .NET v8 client library.
Adding to Attila Repasi's response: I would go for a consumer with a message listener attached. The message listener gets called whenever a message needs to be delivered to the application. This avoids the application explicitly calling receive() to retrieve messages from the queue and wasting CPU cycles when there are no messages on the queue.
Check the XMS.NET best practices
Keep the connection and session open for a longer period if your application sends or receives messages continuously. Creating a connection or session is a time-consuming operation that uses a lot of resources and involves network flows (for client connections).
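A minimal sketch of the listener approach in JMS terms (XMS .NET mirrors the JMS interfaces closely, so the C# version has the same shape); the connection factory and queue name are placeholders:

```java
import javax.jms.*;

// Listener sketch: the provider pushes messages to the callback, so there is
// no polling loop and no wasted CPU when the queue is empty.
public class QueueListener {
    public static void listen(ConnectionFactory factory, String queueName) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));

        // Called by the provider whenever a message arrives.
        consumer.setMessageListener(message -> {
            // process the message here
        });

        connection.start(); // delivery begins only after start()
        // keep the connection and session open for the life of the application
    }
}
```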
I'm not sure what you are calling a session, but typically applications connect to the queue manager serving them once at start, and keep that connection up while running.
I don't see a reason to disconnect just to reconnect 5 seconds later.
As for keeping the queues open, it depends on your environment.
If there are no special circumstances, I would keep the queue open.
I think the point most worth thinking about is how you issue the GETs to read the messages.
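For illustration, here is what the long-lived approach looks like in JMS terms: one connection and session for the life of the application, with a blocking receive instead of a connect/disconnect every cycle (factory and queue name are placeholders):

```java
import javax.jms.*;

// Polling sketch: the connection and session are created once; each cycle is
// just a blocking receive with a 5-second timeout.
public class Poller {
    public static void poll(ConnectionFactory factory, String queueName) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        connection.start();

        while (true) {
            Message message = consumer.receive(5000); // null on timeout
            if (message != null) {
                // process the message
            }
            // nothing is closed between iterations
        }
    }
}
```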
Related
My application is a Spring Boot microservice listening to a RabbitMQ queue.
The queue receives messages from different sources.
The requirement is that when the application server is going down (this could happen for many reasons: maybe we brought the site down, or we are deploying updated software to our application server), we would like the queue to finish processing the current message. As of now, we lose the message that is currently being processed.
How can I achieve this?
The default shutdownTimeout is 5000ms; you can increase it.
You should not, however, lose any messages; they should be requeued (unless you are using AcknowledgeMode.NONE, which is generally a bad idea).
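As a sketch, raising the timeout on a Spring AMQP SimpleMessageListenerContainer looks like this (the queue name and the 30-second value are only examples):

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

// Sketch: give an in-flight message more time to finish before shutdown.
public class ContainerConfig {
    public static SimpleMessageListenerContainer container(ConnectionFactory cf) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("my.queue");
        container.setShutdownTimeout(30000); // default is 5000 ms
        return container;
    }
}
```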
I'm designing a fairly complicated system and was wondering what the best way is to put a JMS consumer (ActiveMQ, VM protocol, non-persistent) inside a Netty handler.
Let me explain: I have several clients connecting to my Netty server using WebSockets. For every client connection I create a JMS consumer that listens for interesting messages on one or more topics. If an interesting message arrives, I need to do an extra step (additional filtering) before sending the message to the client over the WebSocket.
Is the following a good way to do this:
Inside a SimpleChannelInboundHandler I declare a private, non-static consumer.
The consumer is initialized in channelActive.
The consumer is destroyed in channelInactive.
When a message is received by the consumer, I do the extra filtering and send it using ctx.channel().write().
In this setup I'm a bit worried that the consumer might turn into a slow consumer and slow everything down, because the WebSocket traffic goes over the internet.
I came up with a more complex design to decouple receiving the message in the consumer from sending it over the WebSocket.
Inside a SimpleChannelInboundHandler I declare a private, non-static consumer.
The consumer is initialized in channelActive.
The consumer is destroyed in channelInactive.
When a message is received by the consumer, I put it in a blocking queue.
Every minute I let a thread (created for every client) drain the queue and send the found messages to the client using ctx.channel().write().
At this point I'm a bit worried about the extra thread per client.
Or is there maybe a better way to accomplish this task?
This is a classic slow-consumer problem, and the first step to resolving it is to determine what the appropriate action is when a slow consumer is detected. If it is acceptable for the slow consumer to miss messages, then the solution is some variation on dropping messages or unsubscribing it from the feed. For example, if it's acceptable that the client misses messages, then when one is received from JMS, check whether the channel is writable. If it isn't, drop the message. If you want to give yourself a bit more of a buffer (although OS buffers are quite large), you can track the number of write-completion futures that haven't completed (i.e. messages that haven't been written to the OS send buffer) and drop messages if there are too many outstanding write requests.
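As an illustration, a sketch of the drop-when-unwritable approach (the frame conversion is application-specific and only a placeholder here):

```java
import io.netty.channel.Channel;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch: drop messages for a slow client instead of buffering without bound.
// The channel is the client's Netty channel, captured in channelActive.
public class DroppingListener implements MessageListener {
    private final Channel channel;

    public DroppingListener(Channel channel) {
        this.channel = channel;
    }

    @Override
    public void onMessage(Message message) {
        // isWritable() turns false once Netty's outbound buffer passes its high-water mark.
        if (channel.isWritable()) {
            channel.writeAndFlush(toFrame(message));
        }
        // else: drop the message; this client is too slow
    }

    private Object toFrame(Message message) {
        return message; // placeholder: convert the JMS message to a WebSocket frame
    }
}
```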
If the client may not miss messages, and is consistently slow, then the problem is more difficult. One option might be to divert messages to a JMS queue with a specific header value, then open a new consumer that reads messages from that queue using a JMS selector. This puts more load on the JMS server but might be appropriate for temporary slowness, and hopefully it won't interfere with your main topic feeds. Alternatively, you might want to stash the messages in a different store, such as a database, so you can poll for messages when they can be sent. If you do this right, a single polling thread can cope with many clients (query for the clients that have outstanding messages, then load a batch of messages for each). However, this isn't as convenient as using JMS.
I wouldn't go with option 2, because the blocking queue is only going to solve the problem temporarily, and you can achieve the same thing by tracking how many write operations are waiting to complete.
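A sketch of that: count the writes whose completion futures are still outstanding and drop once a limit is reached (the limit of 100 is arbitrary):

```java
import io.netty.channel.Channel;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: bound the number of writes that have not yet reached the OS send buffer.
public class BoundedWriter {
    private static final int MAX_PENDING = 100;
    private final AtomicInteger pending = new AtomicInteger();

    public void write(Channel channel, Object frame) {
        if (pending.get() >= MAX_PENDING) {
            return; // too many outstanding writes: drop the message
        }
        pending.incrementAndGet();
        // the listener fires when the write completes, successfully or not
        channel.writeAndFlush(frame).addListener(future -> pending.decrementAndGet());
    }
}
```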
I'm looking for a way to schedule an MDB. My requirement is that the MDB feeds a system in the company. This system goes down for maintenance every night, but the other systems don't know about it and may keep trying to feed it. A persistent queue is great in that my messages can pile up until the system comes back online.
How could I manage that? I've already run into this: schedule a message-driven bean to access a queue during certain times? But it uses Java 7 and, worse, the message is lost if the server restarts (the message is taken off the JMS queue and kept in memory until the timer processes it).
Another use of this would be to implement a "retry" queue. In case of an error I want to retry processing my message, but not immediately: only after a certain amount of time.
Any ideas to keep my MDB offline for a certain amount of time?
Most versions of JBoss publish a management MBean that allows you to stop delivery on an MDB.
If you're using EJB3, however, MDBs auto-start, so you will need to register a startup class that stops them at boot time if the boot occurs in your MDB's blackout period. Once past that snafu, you can schedule a simple Quartz job to start and stop the MDBs according to your delivery windows.
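For the Quartz job body, the JMX plumbing is a few lines; note the ObjectName and the operation names vary by JBoss version, so stopDelivery/startDelivery below are assumptions to verify against your server's MBean view:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: toggle MDB delivery via the management MBean. The object name and
// the stopDelivery/startDelivery operation names are assumptions; check them
// in your JBoss version's JMX console.
public class MdbDeliveryControl {
    private static final MBeanServer SERVER = ManagementFactory.getPlatformMBeanServer();

    public static void setDelivery(String mdbObjectName, boolean on) throws Exception {
        ObjectName name = new ObjectName(mdbObjectName); // hypothetical, e.g. "jboss.j2ee:...,name=MyMDB"
        SERVER.invoke(name, on ? "startDelivery" : "stopDelivery", new Object[0], new String[0]);
    }
}
```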
Well, it looks like there is no way to pause an MDB in a generic way. The best solution, as most people will answer, is to use the DLQ (or DMQ).
Now, if I want to put a delay on a message, I set the producer's time-to-live to the amount of time I want the message to wait, then send it to a normal queue, say waitingQueue, which has no consumer. After expiration, the message is sent to the default destination (mq.sys.dmq for GlassFish MQ; make sure to create a JMS resource with mq.sys.dmq as imqDestinationName). I have an MDB listening on that error queue, responsible for sending the message again. Now, if I want to "close" a queue for some time: when a message arrives in the queue, I check whether the current time is within the allowed hours. If not, I set the time-to-live to the amount of time remaining before the next opening hours and send the message to waitingQueue.
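A minimal sketch of the send side of that trick (waitingQueue is the consumer-less queue from above; the delay is whatever the caller computes):

```java
import javax.jms.*;

// Sketch: send to a consumer-less queue with a time-to-live, so the message
// expires into the dead-message queue after the desired delay.
public class DelayedSend {
    public static void sendDelayed(Session session, Message message, long delayMillis) throws JMSException {
        MessageProducer producer = session.createProducer(session.createQueue("waitingQueue"));
        producer.setTimeToLive(delayMillis); // message expires after this long
        producer.send(message);              // on expiry it moves to the DMQ
        producer.close();
    }
}
```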
The reason I didn't use this from the beginning is that I fell into a few pitfalls. Here are a few useful properties to set when using the DMQ with GlassFish 3.1.1 and its embedded MQ.
imq.message.expiration.interval=1: this is the poll interval on each queue before timed-out messages are sent to the DMQ. The default is 60 seconds. If, like me, you want to test your application with little latency, this is what you need.
What is the best approach to connect to WebSphere MQ v7.1 and clear all the messages from one or more specified queues using Java and JMS? Do I need to use the WebSphere MQ specific Java API? Thanks.
Like all good questions, "it depends."
The queue can be cleared with a command only if there are no open handles on the queue. In that case, sending a PCF command to clear the queue is quite effective, but if there are open handles you get back an error. PCF commands are of course a Java feature and not JMS, because they are proprietary to WebSphere MQ.
On the other hand, any program authorized to perform destructive gets on a queue can clear it. In this case, just loop over a get until you get the 2033 return code indicating the queue is empty. This can be done using JMS or Java, but both of these manage the input buffer for you. If the queue is REALLY deep, then you end up moving all that data, and if the app is client-connected, you are moving it at network speed instead of in memory.
To get around this, specify a minimal buffer and include MQGMO_ACCEPT_TRUNCATED_MSG in the GET options. This moves only the message header during the get calls and can be significantly faster.
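A sketch with the WebSphere MQ classes for Java (queue manager and queue names are placeholders; reason code 2079, truncated message accepted, is the expected warning and means the message was removed):

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

// Sketch: drain a queue while accepting truncation, so only headers move.
public class QueueDrainer {
    public static void drain(String qmgrName, String queueName) throws MQException {
        MQQueueManager qmgr = new MQQueueManager(qmgrName);
        MQQueue queue = qmgr.accessQueue(queueName, CMQC.MQOO_INPUT_SHARED);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_ACCEPT_TRUNCATED_MSG | CMQC.MQGMO_NO_SYNCPOINT;

        while (true) {
            try {
                queue.get(new MQMessage(), gmo, 0); // 0-byte buffer: body is discarded
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    break; // 2033: queue is empty
                }
                if (e.reasonCode != CMQC.MQRC_TRUNCATED_MSG_ACCEPTED) {
                    throw e; // anything other than the expected 2079 warning
                }
                // 2079: the truncated get still removed the message; keep going
            }
        }
        queue.close();
        qmgr.disconnect();
    }
}
```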
Finally, if you are doing this programmatically, and regardless of which method you use, spin off several threads and don't use syncpoint. You actually have to go out of your way to get exclusive input on a queue, so once you get a session, just spawn many threads off of it. Close each thread gracefully and shut down the session once all the threads are closed.
We are using IBM MQ and we are facing some serious problems controlling its asynchronous delivery to recipients. We have some Java listeners configured. The problem is that we need to control the messages coming toward the listeners, because messages arrive in the millions and the server machine doesn't have the capacity to process that many threads at a time. Is there any way to throttle on the IBM MQ side, such as configuring a prefetch limit the way Apache ActiveMQ does?
Or is there any other way to achieve this?
Currently we close the connection to IBM MQ when some limit X is reached on the listener, but that doesn't seem to be an efficient way.
Please help us out with this issue.
Generally, with message queueing technologies like MQ, the point of the queue is that the sender is decoupled from the receiver. If you're having trouble with message volumes, then the answer is to let them queue up on the receiving queue and process them as best you can, not to throttle the sender.
The obvious answer is to limit the maximum number of threads that your listeners are allowed to take up. I'm assuming you're using some sort of MQ threadpool? What platform are you using that provides unlimited listener threads?
From your description, it almost sounds like you have some process running that, as soon as it detects a message in the queue, reads the message, starts up a new thread, and goes back to look at the queue again. This is the WRONG approach.
You should have a defined number of processing threads running (start with one and scale up as required, within the limits of your server), each reading from the queue itself. They would each open the queue in shared mode and either get-with-wait or do an immediate get with a sleep if you get MQRC 2033 (no messages on the queue).
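In JMS terms, where receive(timeout) is the analogue of get-with-wait, a sketch looks like this (factory, queue name, and thread count are placeholders):

```java
import javax.jms.*;

// Sketch: a fixed number of consumer threads, each with its own session,
// processing messages inline rather than spawning a thread per message.
public class FixedConsumers {
    public static void start(ConnectionFactory factory, String queueName, int threads) throws JMSException {
        Connection connection = factory.createConnection();
        connection.start();
        for (int i = 0; i < threads; i++) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
            new Thread(() -> {
                while (true) {
                    try {
                        Message m = consumer.receive(5000); // get-with-wait equivalent
                        if (m != null) {
                            // process the message here, on this thread
                        }
                    } catch (JMSException e) {
                        break; // connection problem: let the thread end
                    }
                }
            }).start();
        }
    }
}
```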
Hope that helps.
If you are running in an application server environment, then the maxPoolDepth property on the activation spec defines the maximum ServerSessionPool size for the MDB; decreasing it throttles the number of messages being delivered concurrently.
Of course, if your MDB (or javax.jms.MessageListener in a JSE environment) does nothing but hand the message off to something else (or, worse, just spawns an unmanaged Thread and starts it), onMessage will return quickly and you can still encounter problems. So in that case you need to limit other resources too, e.g. via thread pool configuration.
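For instance, a bounded hand-off pool whose rejection policy runs the task on the delivery thread itself, which naturally slows onMessage down (all sizes are examples, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: when the workers and the backlog are full, CallerRunsPolicy makes
// onMessage execute the work itself, so it stops accepting new deliveries.
public class ThrottledHandoff {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 16,                                    // core and max worker threads
            60, TimeUnit.SECONDS,                     // idle keep-alive
            new ArrayBlockingQueue<>(100),            // bounded backlog
            new ThreadPoolExecutor.CallerRunsPolicy() // push back on the caller
    );

    public void handOff(Runnable work) {
        pool.execute(work);
    }
}
```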
Closing the connection to the queue manager is never an efficient approach, as the MQCONN/MQDISC cycle is expensive.