We have an ongoing problem with message writes to MSMQ being very slow. Our queue is on Windows Server 2008 SP2. The queue is a public queue, addressed at "servername\queuename".
The code to send a message to the queue is
MessageQueue queue = new MessageQueue(Settings.Default.DefaultDestinationQueue);
queue.Formatter = new BinaryMessageFormatter();
queue.Send(message);
The message that we're trying to send is simply a "PublishMessage", as follows:
[Serializable]
public class PublishMessage
{
    public int EntryId { get; set; }
}
We're seeing the messages actually reach the queue, but logging before and after shows that it's generally taking in excess of 1 minute for each message.
At this time we can't see anything wrong with our queue configuration, but we are not queuing experts; this is the first time queuing has been added to this application. Does anyone have any ideas?
Edit: Our server is running SP2 (not SP1 as I originally stated). Sending from an instance running directly on the machine hosting the queue is fast; sending from any other machine is not.
Note: Crossposted this at https://serverfault.com/questions/272242/msmq-write-taking-1-minute
Our problem turned out to be an infrastructure issue with servers not properly authenticating. Our sysadmin finally got it straightened out by reconfiguring the server hosting the queue.
Is the queue transactional or non-transactional? Try toggling this setting and see if you get the same behavior.
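If the queue turns out to be transactional, note that sends generally need to be marked as part of a transaction. A minimal sketch of a transactional send, assuming the same queue path and message object as in the question:

using (MessageQueue queue = new MessageQueue(Settings.Default.DefaultDestinationQueue))
{
    queue.Formatter = new BinaryMessageFormatter();
    // MessageQueueTransactionType.Single wraps this one send in its own internal transaction
    queue.Send(message, MessageQueueTransactionType.Single);
}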
How much space is allocated for the queue? Try increasing the space, and/or moving its storage to a different drive (on a different physical spindle).
Are there messages in the system queues? Check them, and also try purging all the messages from the system queues and re-running.
What are the best practices regarding sessions in an application that is designed to fetch messages from an MQ server every 5 seconds?
Should I keep one session open for the whole time (which could be weeks or longer), or is it better to open a session, fetch the messages, and then close the session again?
I am using the .NET IBM XMS v8 client library.
Adding to Attila Repasi's response, I would go for a consumer with a message listener attached. The message listener gets called whenever a message needs to be delivered to the application. This avoids the application explicitly calling receive() to retrieve messages from the queue and wasting CPU cycles when there are no messages on the queue.
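A minimal sketch of such a listener-driven consumer with XMS .NET; the host, port, channel, queue manager and queue names below are placeholders:

using System;
using IBM.XMS;

class Listener
{
    static void Main()
    {
        XMSFactoryFactory ff = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ);
        IConnectionFactory cf = ff.CreateConnectionFactory();
        cf.SetStringProperty(XMSC.WMQ_HOST_NAME, "mqhost");        // placeholder
        cf.SetIntProperty(XMSC.WMQ_PORT, 1414);
        cf.SetStringProperty(XMSC.WMQ_CHANNEL, "APP.SVRCONN");     // placeholder
        cf.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, "QM1");       // placeholder
        cf.SetIntProperty(XMSC.WMQ_CONNECTION_MODE, XMSC.WMQ_CM_CLIENT);

        // Create the connection and session once and keep them open
        IConnection connection = cf.CreateConnection();
        ISession session = connection.CreateSession(false, AcknowledgeMode.AutoAcknowledge);
        IDestination queue = session.CreateQueue("queue:///APP.QUEUE");   // placeholder
        IMessageConsumer consumer = session.CreateConsumer(queue);

        // The client drives delivery; no polling loop in application code
        consumer.MessageListener = new MessageListener(OnMessage);
        connection.Start();

        Console.WriteLine("Listening; press Enter to exit.");
        Console.ReadLine();
        connection.Close();
    }

    static void OnMessage(IMessage msg)
    {
        ITextMessage text = msg as ITextMessage;
        Console.WriteLine(text != null ? text.Text : msg.ToString());
    }
}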
Check the XMS.NET best practices
Keep the connection and session open for a longer period if your application sends or receives messages continuously. Creating a connection or session is a time-consuming operation that uses a lot of resources and involves network flows (for client connections).
I'm not sure what you are calling a session, but typically applications connect to the queue manager serving them once at start, and keep that connection up while running.
I don't see a reason to disconnect just to reconnect 5 seconds later.
As for keeping the queues open, it depends on your environment.
If there are no special circumstances, I would keep the queue open.
I think the thing most worth thinking about is how you issue the GETs to read the messages.
I'm seeing some strange behaviour in our production deployment with Azure queue messages:
Some of the messages in the queues appear with a big delay: minutes, and sometimes 10 minutes.
Before you ask about setting a delayTimeout when we put the message on the queue: we do not set a delayTimeout for that message, so the message should appear almost immediately after it is placed on the queue.
At those moments we do not have a big load, so my instances have no backlog and are able to process messages quickly, but the messages just don't appear.
Our service processes millions of messages per month. We have been able to identify 10-50 messages that were processed with a very big delay, and because of that we miss the SLA we have with our customers.
Does anyone have any idea what the reason could be, and how to overcome it?
Has anyone faced similar issues?
Some general ideas for troubleshooting:
Are you certain that the message was queued up for processing, i.e. the queue.AddMessage operation returned successfully and only then did you start waiting 10 minutes? That would mean you can rule out any client-side retry policies etc. as being the cause of the problem.
Is there any chance that the time calculation is subject to some kind of clock skew problem? E.g. if one of the worker roles pulling messages has its clock out of sync with the other worker roles, you could see this.
Is it possible that, in the situations where the message appears to be delayed, a worker role responsible for pulling the messages is actually failing or crashing? If the client calls GetMessage but does not respond with an appropriate acknowledgement within the time specified by the invisibilityTimeout setting, the message will become visible again, as the Queue service assumes the client did not process it. You can tell whether this is a contributing factor by looking at the dequeue count on the messages that are taking longer (a rough sketch follows below). More information can be found here: http://msdn.microsoft.com/en-us/library/dd179474.aspx.
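A minimal sketch of that dequeue-count check, assuming the classic Microsoft.WindowsAzure.Storage client library; the connection string, queue name and handler are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);   // placeholder
CloudQueueClient client = account.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference("myqueue");                       // placeholder

CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(5));            // invisibility timeout
if (msg != null)
{
    // A DequeueCount greater than 1 means the message became visible again
    // after an earlier receiver failed to delete it in time.
    Console.WriteLine("DequeueCount: {0}", msg.DequeueCount);

    ProcessMessage(msg);        // hypothetical handler
    queue.DeleteMessage(msg);   // the "acknowledgement" for storage queue messages
}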
Is it possible that the number of workers you have pulling items from the queue is insufficient at certain times of the day, and the delays are simply caused by the queue being populated faster than you can pull messages from it?
Have you enabled logging for queues and then looked to see if you can find the specific operations (look at E2ELatency and ServerLatency)? See http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/. You should also enable client logging and try to determine whether the client is having connectivity problems and the retry logic is kicking in.
And finally, if none of these appear to help, can you please send me the server logs (and ideally the client-side logs as well) along with your account information (no passwords) to JAHOGG at Microsoft dot com.
Jason
Azure Service Bus has a property on the BrokeredMessage class called ScheduledEnqueueTimeUtc; it allows you to set the time at which the message becomes available on the queue (effectively creating a delay).
Are you sure your code is not setting this property? That might be the cause of the delay.
You can find more info on this at this url: https://www.amido.com/azure-service-bus-how-to-delay-a-message-being-sent-to-the-queue/
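To check or rule this out, a delayed send looks roughly like this with the Microsoft.ServiceBus.Messaging SDK; the connection string, queue name and payload are placeholders:

using System;
using Microsoft.ServiceBus.Messaging;

QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");  // placeholders

BrokeredMessage msg = new BrokeredMessage("payload");
// The message is accepted immediately but is not visible to receivers
// until the scheduled time - remove this line for immediate delivery.
msg.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(10);

client.Send(msg);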
If you are using WebJobs to process messages from the queue, the delay can be due to the WebJobs configuration.
From an MSDN forum post by pranav rastogi:
Starting with 0.4.0-beta, the (WebJobs) SDK implements a random exponential back-off algorithm. As a result, if there are no messages on the queue, the SDK will back off and start polling less frequently.
The following setting allows you to configure this behavior:
MaxPollingInterval: the longest period of time to wait before checking for a message again once a queue remains empty. The default is 10 minutes.
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();

    // Poll an empty queue at most once per minute instead of the 10-minute default
    config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(1);

    JobHost host = new JobHost(config);
    host.RunAndBlock();
}
I have two MQ queue managers with the same queue names configured. Both are configured to send data to different servers. Currently queue manager QM1 is stopped (status: Ended immediately) and QM2 is running.
My program opens the queue and sends data; it does not specify a queue manager name. When I execute the program, the MQ connection request returns error 2059 (MQRC_Q_MGR_NOT_AVAILABLE).
My questions are:
What happens when multiple queue managers have the same queue name?
How can I tackle this situation without changing the code?
Please forgive me if the description is vague. It would be helpful if anyone could provide links so that a newbie like me can learn something.
Thanks
It would be helpful if you could provide details on your application: whether it uses a server bindings or client mode connection to the queue manager, and what version of MQ you are using.
The below information is valid for MQ v7.x:
If you are using client mode, then you can use multiple CONNAMEs to connect. If one queue manager is down, your application will connect to the next queue manager in the CONNAME list. One of the simplest ways to do this with a client mode connection is to define the MQSERVER environment variable and specify multiple CONNAMEs.
SET MQSERVER=<channel name>/TCP/host1(port1), host2(port2)
For example when both queue managers are on local host:
SET MQSERVER=MYSVRCONCHN/TCP/localhost(1414),localhost(1415)
In server bindings mode, if a queue manager name is not specified, the application will attempt to connect to the default queue manager. If the default queue manager is down, then 2059 is returned.
Your explanation doesn't provide clarity about your requirements.
You wrote:
My questions are 1. What happens when multiple queue managers have same queue name.
Nothing. It's a normal scenario. Different queue managers may have queues with the same name, and it doesn't create any ambiguity. The scenario is a little different when the queue managers are in the same cluster and the queue is also a cluster queue; then everything depends on requirements and design.
You wrote:
2. How to tackle situation without changing the code
Run the queue manager which is stopped.
You wrote:
Now my program opens the queue and sends data. It doesnot specify
queue manager name.
What application are you using? In a client application, you access a queue through a queue manager object.
I am assuming that you are using a client application which doesn't take queue manager details from you, only queue details, and that the queue manager names may be hard-coded. It sends the message first to the queue on queue manager 1 and then to queue manager 2, but in your case queue manager 1 is down.
If the above is the case, then the application's code needs to be changed. You should add exception handling so that the code that sends the message to the second queue manager still runs even if connecting to the first one throws an error.
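As a rough sketch of that fallback, shown here with the IBM MQ classes for .NET (the Java classes are analogous); the queue manager names, queue name and payload are placeholders:

using System;
using IBM.WMQ;

string[] queueManagers = { "QM1", "QM2" };   // placeholders
MQQueueManager qmgr = null;

foreach (string qmName in queueManagers)
{
    try
    {
        qmgr = new MQQueueManager(qmName);   // server bindings; add channel/conname for client mode
        break;                               // connected, stop trying
    }
    catch (MQException)
    {
        // e.g. 2059 MQRC_Q_MGR_NOT_AVAILABLE: fall through and try the next queue manager
    }
}

if (qmgr == null)
    throw new InvalidOperationException("No queue manager available");

MQQueue queue = qmgr.AccessQueue("MY.QUEUE", MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);  // placeholder
MQMessage msg = new MQMessage();
msg.WriteString("payload");
queue.Put(msg);

queue.Close();
qmgr.Disconnect();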
What is the best approach to connect to WebSphere MQ v7.1 and clear all the messages from one or more specified queues using Java and JMS? Do I need to use the WebSphere MQ-specific Java API? Thanks.
Like all good questions, "it depends."
The queue can be cleared with a command only if there are no open handles on the queue. In that case, sending a PCF command to clear the queue is quite effective, but if there are open handles you get back an error. PCF commands are of course a feature of the WebSphere MQ Java classes and not JMS, because they are proprietary to WebSphere MQ.
On the other hand, any program authorized to perform destructive gets off a queue can clear the queue. In this case, just loop over a get until you receive the 2033 return code indicating the queue is empty. This can be done using JMS or the MQ Java classes, but both of these manage the input buffer for you. If the queue is REALLY deep then you end up moving all that data, and if the app is client connected you are moving it at network speed instead of in memory.
To get around this, specify a minimal buffer and, in the GET options, also specify MQGMO_ACCEPT_TRUNCATED_MSG. This moves only the message header (and a tiny fraction of the data) during the get calls and can be significantly faster.
Finally, if you are doing this programmatically, and regardless of which method you use, spin off several threads and don't use syncpoint. You actually have to go out of your way to get exclusive input on a queue, so once you get a session, just spawn many threads off of it. Close each thread gracefully and shut down the session once all the threads are closed.
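A rough sketch of that drain loop, shown here with the IBM MQ classes for .NET (the WebSphere MQ Java classes are analogous); the queue manager and queue names are placeholders:

using IBM.WMQ;

MQQueueManager qmgr = new MQQueueManager("QM1");                        // placeholder
MQQueue queue = qmgr.AccessQueue("MY.QUEUE", MQC.MQOO_INPUT_SHARED);    // placeholder

MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_NO_SYNCPOINT | MQC.MQGMO_ACCEPT_TRUNCATED_MSG | MQC.MQGMO_NO_WAIT;

MQMessage msg = new MQMessage();
while (true)
{
    try
    {
        // 1-byte buffer: the message is still removed even though only a fragment is returned
        queue.Get(msg, gmo, 1);
        msg.ClearMessage();
    }
    catch (MQException mqe)
    {
        if (mqe.ReasonCode == MQC.MQRC_TRUNCATED_MSG_ACCEPTED)
        {
            msg.ClearMessage();   // warning only: the message was destructively read
            continue;
        }
        if (mqe.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)
            break;                // 2033: the queue is empty
        throw;
    }
}

queue.Close();
qmgr.Disconnect();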
We are using IBM MQ and are facing some serious problems controlling its asynchronous delivery to the recipient. We have some Java listeners configured; the problem is that we need to control the messages flowing towards the listeners, because messages arrive at the server in the millions and the server machine doesn't have the capacity to process that many threads at a time. Is there any way to throttle on the IBM MQ side, e.g. by configuring a prefetch limit the way Apache ActiveMQ does?
Or is there any other way to achieve this?
Currently we close the connection to IBM MQ when some limit X is reached on the listener, but that doesn't seem to be an efficient way.
Please help us solve this issue.
Generally with message queueing technologies like MQ the point of the queue is that the sender is decoupled from the receiver. If you're having trouble with message volumes then the answer is to let them queue up on the receiver queue and process them as best you can, not to throttle the sender.
The obvious answer is to limit the maximum number of threads that your listeners are allowed to take up. I'm assuming you're using some sort of MQ threadpool? What platform are you using that provides unlimited listener threads?
From your description, it almost sounds like you have some process running that, as soon as it detects a message in the queue, reads the message, starts up a new thread, and goes back to look at the queue again. This is the WRONG approach.
You should have a defined number of processing threads running (start with one and scale up as required, within the limits of your server) which read from the queue themselves. They would each open the queue in shared mode and either get-with-wait or do an immediate get with a sleep if you get MQRC 2033 (no messages in the queue).
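A rough sketch of such a fixed pool of readers using get-with-wait, shown here with the IBM MQ classes for .NET (the Java classes are analogous); the queue manager name, queue name and handler are placeholders:

using System;
using System.Threading;
using IBM.WMQ;

class ThrottledConsumers
{
    static void Main()
    {
        // Start a fixed, limited number of workers instead of one thread per message
        for (int i = 0; i < 4; i++)
            new Thread(Worker).Start();
    }

    static void Worker()
    {
        // Each worker owns its own connection and opens the queue in shared mode
        MQQueueManager qmgr = new MQQueueManager("QM1");                        // placeholder
        MQQueue queue = qmgr.AccessQueue("APP.QUEUE", MQC.MQOO_INPUT_SHARED);   // placeholder

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.Options = MQC.MQGMO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.WaitInterval = 5000;   // get-with-wait: block up to 5 seconds per attempt

        while (true)
        {
            MQMessage msg = new MQMessage();
            try
            {
                queue.Get(msg, gmo);
                ProcessMessage(msg);   // hypothetical handler
            }
            catch (MQException mqe)
            {
                if (mqe.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)
                    continue;          // 2033: nothing arrived within the wait; try again
                throw;
            }
        }
    }

    static void ProcessMessage(MQMessage msg)
    {
        Console.WriteLine(msg.ReadString(msg.MessageLength));
    }
}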
Hope that helps.
If you are running in an application server environment, then the maxPoolDepth property on the activationSpec defines the maximum ServerSessionPool size for the MDB; decreasing this will throttle the number of messages being delivered concurrently.
Of course, if your MDB (or javax.jms.MessageListener in the JSE environment) does nothing but hand the message to something else (or, worse, just spawn an unmanaged Thread and start it), onMessage will spin rapidly and you can still encounter problems. So in that case you need to limit other resources too, e.g. via threadpool configuration.
Closing the connection to the QM is never an efficient way, as the MQCONN/MQDISC cycle is expensive.