How can I know when an MQ queue manager was restarted? - ibm-mq

I have a set of IBM MQ queue managers and would like to know when one of them was restarted or when it automatically failed over to the standby instance.
The queue managers run on AIX.
Regards,

You can find this information in the queue manager's AMQERR01.LOG, or by running DIS QMSTATUS ALL.
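For example, DIS QMSTATUS ALL reports the most recent start date and time in the STARTDA and STARTTI fields. The queue manager name and output below are illustrative; the exact fields shown vary by MQ version:

$ runmqsc QM1
DIS QMSTATUS ALL
AMQ8705: Display Queue Manager Status Details.
   QMNAME(QM1)                             STATUS(RUNNING)
   STARTDA(2019-06-01)                     STARTTI(08.30.15)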

Queue Manager START events are also emitted (provided they are enabled), and they include information about whether the start was a multi-instance failover. See https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.ref.doc/q049990_.htm
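If the events are not already enabled, they can be switched on with MQSC; the event messages are then written to SYSTEM.ADMIN.QMGR.EVENT, where a monitoring application can pick them up:

ALTER QMGR STRSTPEV(ENABLED)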

Related

IBM MQ - MQ Client Connecting to an MQ Cluster

I have an MQ cluster which has two full repository queue managers and two partial repository queue managers. A Java client needs to connect to this cluster and should PUT/GET messages. Which queue manager should I expose to the Java client: one of the full repositories, or any one of the partial repositories? Is there a pattern/anti-pattern, or an advantage/disadvantage, to exposing any of these?
Any advice would be very much appreciated.
Regards,
Yasothar
One pattern that is so often used that it has been adopted as a full-fledged IBM MQ feature is known as a Uniform Cluster. It is so called because every queue manager in the cluster hosts the same queues, and so it does not matter which queue manager the client application connects to.
However, to answer your direct question: the client should be given a CCDT which contains entries for each queue manager that hosts a copy of the queue the client application is using. This could be EVERY queue manager in the cluster, as per the Uniform Cluster pattern, or just some of them. It does not need to be only partial or only full repository queue managers; instead, you are looking for those hosting the queue, as sketched below.
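For illustration, CCDT entries are built as client-connection channel definitions; a minimal sketch, with channel names, host names, and queue manager names assumed:

* One CLNTCONN entry per queue manager hosting the application queue
DEFINE CHANNEL('APP.QM1') CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('host1(1414)') QMNAME(QM1)
DEFINE CHANNEL('APP.QM2') CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('host2(1414)') QMNAME(QM2)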
If you are not sure what that set of queue managers is, you could go to one of your full repository queue managers and type in the following MQSC command (using runmqsc or your favourite other MQSC tool):
DISPLAY QCLUSTER(name-of-queue-used-by-application) CLUSQMGR
and the resultant list will show the hosting queue manager names in the CLUSQMGR field.
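Illustrative output (queue and queue manager names assumed), with one entry returned per hosting queue manager:

DISPLAY QCLUSTER(APP.REQUEST) CLUSQMGR
AMQ8409: Display Queue details.
   QUEUE(APP.REQUEST)   TYPE(QCLUSTER)   CLUSQMGR(QM1)
AMQ8409: Display Queue details.
   QUEUE(APP.REQUEST)   TYPE(QCLUSTER)   CLUSQMGR(QM2)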

WebSphere Remote MQ Clustering

I have two machines, Machine 1 and Machine 2. Machine 1 has one queue manager, QM1, and a broker (integration server), BR1; Machine 2 has one queue manager, QM2, and a broker (integration server), BR2. I want to make a cluster between QM1 and QM2. I created a remote cluster queue named INVENTQ on QM2. I am able to successfully put a message via either queue manager and see the corresponding message on INVENTQ in QM2. But I want the architecture to be such that I can receive the message from the queue via any of the queue managers, not only the queue manager on which INVENTQ was created, i.e. QM2. Can anybody guide me on this?
MQ does not have a 'remote get' capability, i.e. you cannot use local bindings to one queue manager and get a message from another queue manager. If you want to do this, you need to use client bindings to connect directly to the queue manager where the message resides.
At MQPUT time, a decision has to be made (on the putting queue manager) about where to forward the message: e.g. to which local queue it goes, or onto which transmission queue it is placed to pass it to another queue manager.
In a cluster setup, if you have a queue defined on one queue manager and put it in the cluster, anyone on any of the clustered queue managers can put to it as though it were a local queue. However, their MQPUTs result in the message arriving (via the cluster channels) on that one particular instance. Therefore, from a different queue manager you can put a message to the queue, but you cannot get it.
You could have a queue with the same name defined and clustered on multiple queue managers, as per @JoshMc's suggestion, but this means that at MQPUT time the message is routed to one, and only one, instance of that queue; if it is routed to the remote clustered definition, you still cannot get it from the local queue manager. Imagine you have a cluster of 3 queue managers and create a clustered queue called FRED on 2 of them. All 3 can put to FRED, but the 2 hosts will by default put only to their local queue (unless you set CLWLUSEQ(ANY)), and the other will (usually) alternate between the 2 remote instances. Each instance will definitely hold different messages.
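A minimal sketch of that multi-instance pattern (cluster and queue names assumed), run on each queue manager that should host an instance:

DEFINE QLOCAL(FRED) CLUSTER(MYCLUS) CLWLUSEQ(ANY)

CLWLUSEQ(ANY) means that messages put by locally connected applications are also workload-balanced across all instances, rather than always landing on the local one.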
https://www.ibm.com/developerworks/community/blogs/messaging/entry/Undestanding_on_MQ_Cluster_Work_Load_Management_Algorithm_and_Attributes?lang=en

MQ System w/ brokers - Any way to examine everything

I have inherited an IBM MQ (V6) system that has multiple brokers. Is there a way to explore everything succinctly?
i.e. I know which queue managers are running, so without running runmqsc against each and every queue manager, how can I find broker names, listeners, etc.?
MQ Explorer is running, but again it requires knowing each queue manager's name and port to connect successfully.
For MQ, the dmpmqcfg command can be useful to output your configuration info to a file.
For the broker, try the mqsilist command to list installed brokers and their associated resources.
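For example (queue manager name assumed; note that dmpmqcfg ships with MQ V7.1 and later, so on a V6 system you would use the equivalent SupportPac, MS03/saveqmgr, instead):

# dump the full object configuration of one queue manager to a file
dmpmqcfg -m QM1 -a > QM1_config.mqsc
# list the installed brokers and their associated resources
mqsilist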
This web page may also be of help to you:
Performing health checks for WebSphere Message Broker
http://www.ibm.com/developerworks/websphere/library/techarticles/0801_cui/0801_cui.html
To work out which queue managers are running on your machine, use the dspmq command. Then you'll know each queue manager and can run runmqsc against each one, point MQ Explorer at each one, or whatever you need to do next.
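Illustrative dspmq output (queue manager names assumed):

$ dspmq
QMNAME(QM1)        STATUS(Running)
QMNAME(QM2)        STATUS(Ended normally)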

Why does WAS admin console queue configuration not accept asterisk (*) as Queue Manager entry?

I am configuring WAS to connect to MQ via a CCDT, and should be using a queue manager name with a wildcard, i.e. *QMan.
It is accepted on the Queue Connection Factory screen and on the Activation Spec screen, but it is not accepted on the Queue Configuration screen; I am forced to leave the Queue Manager field blank. My application is not receiving MQ messages, and I suspect this might be the reason.
Any ideas why I cannot configure the queue manager on the Queue screen? And what is a common cause of a message listener not receiving, even when the MQPUT is working?
I have double-checked my CCDT configuration in the Activation Spec and checked the JNDI names; everything is configured correctly.
Also note that it works if I connect directly to MQ via host/port etc., but I have to use the CCDT to make use of our MQ cluster.
The Queue Manager (or Queue Sharing Group) name on the JMS Queue panel defines where the queue is located, not how you connect to it. It is the queue manager name in the JMS connection factory or activation specification that defines which queue manager your application connects to.
So it is correct that you can't enter a * in this box.
If the connection is not working when using a CCDT, then it is likely to be a different problem than this Queue Manager name box. Note: you can't use an XA connection with a CCDT, because a CCDT won't guarantee that you connect back to the same queue manager in the event of XA recovery.

Can't connect to WebSphere MQ Queue Manager

I'm a beginner with WebSphere MQ. I was working with MQ 6 and it was working fine, but now I've installed MQ 7.1; I can create a new queue manager, but I can't connect to it, and I get the following error:
[screenshot: connection to 127.0.0.1 on port 1414 failing with reason code 2059]
Do you have any idea about that? Thank you :)
You can look up any WebSphere MQ error code, provided either the WebSphere MQ Client or Server is installed, using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
The 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running but the QMgr name is wrong, and another one if the connection is made to the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed at the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
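A minimal sketch of such a listener object (listener name and port assumed):

DEFINE LISTENER(TCP.1414) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(TCP.1414)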
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If the tasks to secure a V7.0 QMgr are compared to the tasks to provision access on a V7.1 or higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
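For example, CHLAUTH can either be disabled wholesale or used to provision access explicitly (the channel name, address range, and user name below are assumed):

* revert to pre-V7.1 behaviour (not recommended)
ALTER QMGR CHLAUTH(DISABLED)
* or map connections on one channel to a known low-privilege user
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('192.168.1.*') MCAUSER('appuser') ACTION(ADD)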
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then, when you do get a security error, use WebSphere MQ Explorer to look at the event queue: right-click on the queue and select the option to parse the event messages. This will tell you, in excruciating detail, all the information you need to debug the authorization error.
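Once the event message shows which authority check failed, access is typically granted with setmqaut; a sketch with the queue manager, queue, and principal names assumed:

# allow the user to connect to the queue manager
setmqaut -m QM1 -t qmgr -p appuser +connect +inq
# allow the user to put and get on the application queue
setmqaut -m QM1 -t queue -n APP.QUEUE -p appuser +put +get +browse +inq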
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screen-shot, I see hostname 127.0.0.1 and port 1414. If it is a local queue manager, then connect directly to it in bindings mode rather than as a client.
Also, each queue manager MUST use a unique port number. If you had it working with a WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (e.g. 1415, 1416, etc.).
I got the same problem, but I resolved it by:
1. creating a listener manually: define lstr(lstr1) trptype(tcp) port(xxxx) control(qmgr)
2. setting mcauser('mqm') on the server-connection channel (note that MCAUSER('mqm') grants full administrative rights, so this is only appropriate for testing).
