I have two MQ queue managers with the same queue names configured. Both are configured to send data to different servers. Currently queue manager QM1 is stopped (status: Ended immediately) and QM2 is running.
Now my program opens the queue and sends data. It does not specify a queue manager name. When I execute the program, the MQ connection request returns error 2059 (MQRC_Q_MGR_NOT_AVAILABLE).
My questions are:
What happens when multiple queue managers have the same queue name?
How can I handle this situation without changing the code?
Please forgive me if the description is vague. It would be helpful if anyone could provide links so that a newbie like me can learn something.
Thanks
It would be helpful if you could provide details on your application: whether it uses a server bindings or client mode connection to the queue manager, and what version of MQ you are using.
The information below is valid for MQ v7.x:
If you are using client mode, then you can use multiple CONNAMEs to connect. If one queue manager is down, your application will connect to the next queue manager in the CONNAME list. One of the simplest ways to do this with a client mode connection is to define the MQSERVER environment variable and specify multiple CONNAMEs.
SET MQSERVER=<channel name>/TCP/host1(port1),host2(port2)
For example when both queue managers are on local host:
SET MQSERVER=MYSVRCONCHN/TCP/localhost(1414),localhost(1415)
In server bindings mode, if a queue manager name is not specified, the application will attempt to connect to the default queue manager. If the default queue manager is down, 2059 is thrown.
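For illustration, here's a minimal sketch using the MQ classes for Java (other bindings behave the same way) of what happens when no queue manager name is given:

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQQueueManager;

    public class DefaultQmgrConnect {
        public static void main(String[] args) throws MQException {
            // An empty name means "connect to the default queue manager" in
            // bindings mode. If no default is defined, or it is stopped,
            // this throws MQException with reason code 2059
            // (MQRC_Q_MGR_NOT_AVAILABLE).
            MQQueueManager qMgr = new MQQueueManager("");
            System.out.println("Connected to the default queue manager");
            qMgr.disconnect();
        }
    }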
Your explanation doesn't provide clarity about your requirements.
You wrote:
My questions are: 1. What happens when multiple queue managers have the same queue name?
Nothing. It's a normal scenario. Different queue managers may have queues with the same name, and it doesn't create any ambiguity. The scenario is a little different, though, when the queue managers are in the same cluster and the queue is also a cluster queue; then everything depends on the requirements and the design.
You wrote:
2. How can I handle this situation without changing the code?
Start the queue manager that is stopped.
You wrote:
Now my program opens the queue and sends data. It does not specify a queue manager name.
What application are you using? For a client application, you access a queue through a queue manager object.
I am assuming that you are using an application (client) which doesn't take queue manager details from you, only queue details, and that the queue manager may be hard-coded. It sends the message first to the queue on queue manager 1 and then to queue manager 2. But in your case, queue manager 1 is down.
If the above is the case, then the application's code needs to be changed. You should add exception handling so that the code that sends the message to the second queue manager still executes even when the first connection attempt throws an error, as in the sketch below.
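As a rough sketch (the queue manager names QM1/QM2 are placeholders), using the MQ classes for Java:

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class FailoverConnect {
        public static void main(String[] args) throws MQException {
            String[] names = { "QM1", "QM2" }; // primary first, then secondary
            MQQueueManager qMgr = null;
            for (String name : names) {
                try {
                    qMgr = new MQQueueManager(name);
                    break; // connected, stop trying
                } catch (MQException e) {
                    // 2059 = MQRC_Q_MGR_NOT_AVAILABLE: fall through to the next one
                    if (e.reasonCode != CMQC.MQRC_Q_MGR_NOT_AVAILABLE) {
                        throw e;
                    }
                }
            }
            if (qMgr == null) {
                throw new IllegalStateException("No queue manager available");
            }
            // ... open the queue and put the message as before ...
            qMgr.disconnect();
        }
    }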
Related
I need to have two outbound queues on two different servers, with the queue managers acting as primary and secondary respectively. If sending to the primary fails, I want to connect to the secondary, but as soon as the primary comes back up, the application must send to the primary again. Is there any Spring Boot configuration that can help? I am using WebSphere MQ.
For a Java JMS messaging application in Spring Boot, a connection name list allows for setting multiple host(port) endpoints as comma-separated pairs.
The connection factory will then try these endpoints in turn to find an available queue manager. Unsetting WMQConstants.WMQ_QUEUE_MANAGER means the connection factory will connect to the available queue manager on a given host(port) endpoint regardless of its name. A rough sketch follows.
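Here is one way to configure that (channel name and endpoints are placeholders) using the IBM MQ JMS classes directly:

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
    JmsConnectionFactory cf = ff.createConnectionFactory();
    cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
    cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "MYSVRCONCHN"); // placeholder
    // endpoints are tried in order until one accepts the connection
    cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST,
            "hostA(1414),hostB(1414)");
    // WMQ_QUEUE_MANAGER is deliberately left unset, so whichever queue
    // manager is listening at the chosen endpoint is accepted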
You'll need to think about the lifecycle of the connection factory as the connection will remain valid once it has been established. In the scenario where the connection factory is bound to the queue manager on hostB(1414) (the second in the list) and then the queue manager on hostA(1414) (the first in the list) becomes available again, nothing would change until the next connection attempt.
It's important to note that where the queue manager endpoints in the connection name list are unrelated, the queues and messages available will not be the same. Multi-instance queue managers allow a queue manager instance to fail over between two host(port) endpoints. For container deployments, IBM MQ Native HA ensures your messaging applications are routed to an active instance of the queue manager.
A CCDT allows for much more sophisticated connection matching options with IBM MQ than the connection name list method outlined above. More information on CCDT is available in the IBM MQ documentation. For JMS applications you'll need to set the connection factory to use the CCDT, e.g., cf.setStringProperty(WMQConstants.WMQ_CCDTURL, ...), giving the location of the CCDT JSON file.
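For example (the file path is a placeholder), continuing with the connection factory from the sketch above:

    cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
    cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "file:///opt/mq/ccdt.json"); // placeholder
    // the channel and connection names now come from the CCDT, so they
    // are not set on the connection factory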
It's probably worth me putting up an answer rather than more comments, though most of the ground has been covered by Rich's answer, augmented by Rich's, Morag's and my comments.
Client reconnection seems the most natural fit for the first part of the use case, and I'd suggest using a connection name list rather than going to the complexity of using a CCDT for this job. This will ensure that the application connects to your primary queue manager if it's available, and to the secondary queue manager if the primary isn't available, and will also move applications to the secondary if the primary fails. A sketch is shown below.
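In JMS terms, and reusing the placeholder endpoints from the earlier sketch, enabling automatic client reconnection might look like this:

    cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST,
            "primaryHost(1414),secondaryHost(1414)");
    cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS,
            WMQConstants.WMQ_CLIENT_RECONNECT);
    cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800); // seconds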
Once a connection has been made, in general it won't move. The two exceptions are that in a client reconnection configuration, a connection will be migrated to a different queue manager if communication to the first is broken, and in the context of uniform clusters, connections may be asked to migrate to another queue manager to balance load over the cluster. There is no automatic mechanism that I can think of which would force all the connections back from your secondary queue manager to the primary; you'd need to either do something with the application, or, in a client reconnect setup, you could bounce the secondary queue manager.
(You could use this forum to request such a feature, but I can't promise either that the request would be accepted or that it would be rapidly acted on.)
If you want to discuss this further, I'd suggest that more dialogue is probably needed to help us understand your scenario properly, so that we can make the most helpful recommendations. The MQ discussion forum may be a useful place for that, as this sort of back-and-forth doesn't fit well (IMHO) with the StackOverflow model.
I am wondering whether MQ can be used as a state cache for monitoring. Is this a good idea or not?
In theory you can have many sources (monitoring agents) that detect problem states and distribute them to subscribers via an MQ system such as RabbitMQ. But has anyone heard of using MQ systems to cache state, so that when clients initialize, they read from the state queue before subscribing to new state messages? Is that a bad way to use MQ?
So to recap, a monitor would read the current state from a state queue and then set up a subscription queue to receive any new updates. The state queue would be maintained by the monitoring agents that put each alert there to begin with, which would remove any alerts that are no longer valid.
The advantage would be decentralized notification, and it would theoretically be very scalable by adding more MQ systems to relay events.
I have a use case for RabbitMQ that holds the last valid status of a system. When a new client of that system connects, it receives the current status.
It is so simple to do!
You must use the Last Value Cache custom exchange https://github.com/simonmacmullen/rabbitmq-lvc-plugin
Once installed, you send all your status messages to that exchange. Each client that needs the status information creates a queue that has the most recent status delivered to it on instantiation; after that, it continues to receive status updates.
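As a rough sketch with the RabbitMQ Java client (exchange name, routing key, and host are placeholders; "x-lvc" is the exchange type the plugin registers):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class LastValueCacheDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumes a local broker with the LVC plugin
            try (Connection conn = factory.newConnection();
                 Channel ch = conn.createChannel()) {
                // "x-lvc" is the custom exchange type provided by the plugin
                ch.exchangeDeclare("system.status", "x-lvc");
                // publisher: each publish replaces the cached value for its routing key
                ch.basicPublish("system.status", "status", null,
                        "UP".getBytes(StandardCharsets.UTF_8));
                // consumer: a freshly bound queue receives the last cached value
                // immediately, then all subsequent updates
                String q = ch.queueDeclare().getQueue();
                ch.queueBind(q, "system.status", "status");
                ch.basicConsume(q, true,
                        (tag, delivery) -> System.out.println(
                                new String(delivery.getBody(), StandardCharsets.UTF_8)),
                        tag -> { });
                Thread.sleep(1000); // give the delivery a moment before closing
            }
        }
    }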
IBM MQ FTE uses a similar approach for storing logs.
I think it is a good idea if you can prevent the destination queue from overflowing, because IBM MQ, for example, removes expired messages only during a GET call.
We have an environment where MQ acts as an interface between websites and Micro Focus. Sometimes a message gets stuck in a queue, thereby blocking all communications over that particular queue. If the queue depth increases greatly, all communication in the queue manager stops.
When we check the status of the queue, we see that the Micro Focus process is still connected to it.
Is there a way to automatically clear all applications connected to the queue?
I don't think it's possible to close an application's handle on a given queue, but you could have a script that runs a couple of MQSC commands against the queue manager: first get the connection identifier using the DISPLAY CONN command, then close the connection using the STOP CONN command. You could then set up a trigger on the queue that executes the script once a certain queue depth has been reached.
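For example (the queue name is a placeholder), the script could run something like the following in runmqsc, taking the CONN identifier from the output of the first command and passing it to the second:

    DISPLAY CONN(*) TYPE(HANDLE) WHERE(OBJNAME EQ MY.QUEUE)
    STOP CONN(<connection id from the DISPLAY output>)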
I was wondering if there is a way to configure a queue to automatically clear messages. We are striving to partially implement a component of our architecture and want to be able to send to the queue, but have the queue automatically remove the messages being sent so that we don't have to run scripts, etc., to perform the clean-up.
So far the only thing I have been able to find is to run CLEAR QLOCAL or set the messages to expire from the publishing application.
For your use case there are a few options in IBM MQ:
Create a QALIAS that points to a TOPIC object which has a topic string with no subscribers; messages put to the QALIAS will just disappear (see the MQSC sketch after this list).
Have the sending application set message expiry.
Use the IBM MQ CAPEXPRY feature to administratively force message expiry at the queue level.
Run a script to issue CLEAR QLOCAL against the queue. There cannot be open handles on the queue for this to work.
Programmatically issue the equivalent PCF command to CLEAR QLOCAL against the queue. There cannot be open handles on the queue for this to work.
Run the IBM MQ dmpmqmsg utility against the queue to read and discard the messages.
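As a sketch of the first option (object names and topic string are placeholders), the MQSC definitions would look something like this; with no subscribers on the topic string, anything put to the alias is discarded:

    DEFINE TOPIC(DEV.NULL.TOPIC) TOPICSTR('dev/null')
    DEFINE QALIAS(DEV.NULL.QA) TARGET(DEV.NULL.TOPIC) TARGTYPE(TOPIC)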
I have a WebSphere MQ queue manager with a transmission queue defined, and I'm using the API to get some information about the queue. When trying to inquire on the queue (using the .NET interface, but I believe this is not important here), I always receive an exception with reason 2042: MQRC_OBJECT_IN_USE. According to the documentation, this means that there's an exclusive lock on the queue. On further investigation, I can see that the process holding the lock is runmqchl, part of the MQ server.
Is the exclusive lock typical for transmission queues?
Or does this mean that there's something wrong with the queue or the transmission?
Even better, is there a way to do some read-only inquiries against that locked queue (e.g., to get its depth or browse the messages) using the API?
The SDR or SVR channel will always open the transmission queue for exclusive use. If the .NET client is getting an error because of this, then it is asking for input rights as well as inquire. You can verify this by using WMQ Explorer to inquire on the queue; you will see that it has no problem getting queue attributes, depth, etc. So open the queue for inquire, but not for browse or get, and you should be fine.
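To illustrate (queue manager and queue names are placeholders; shown with the MQ classes for Java, though the .NET open options are equivalent):

    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class XmitqDepth {
        public static void main(String[] args) throws Exception {
            MQQueueManager qMgr = new MQQueueManager("QM1"); // placeholder
            // MQOO_INQUIRE only -- no input or browse, so this does not clash
            // with the sender channel's exclusive input lock on the XMITQ
            MQQueue xmitq = qMgr.accessQueue("MY.XMITQ",
                    CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING);
            System.out.println("Current depth: " + xmitq.getCurrentDepth());
            xmitq.close();
            qMgr.disconnect();
        }
    }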