Could you please explain why there would be messages pending in the queue SYSTEM.CHLAUTH.DATA.QUEUE? There are 3 messages in the queue now.
What happens if these messages are deleted? Will there be any issues if we delete them?
Are those information messages about the channel authentication records?
Please suggest a resolution.
The IBM MQ v7.5 Knowledge Center page "Troubleshooting channel authentication records" addresses what the SYSTEM.CHLAUTH.DATA.QUEUE is used for.
Behaviour of SET CHLAUTH command over queue manager restart
If the SYSTEM.CHLAUTH.DATA.QUEUE has been deleted or altered in a way
that it is no longer accessible, i.e. PUT(DISABLED), the SET CHLAUTH
command will only be partially successful. In this instance, SET
CHLAUTH will update the in-memory cache, but will fail when hardening.
This means that although the rule put in place by the SET CHLAUTH
command may be operable initially, the effect of the command will not
persist over a queue manager restart. The user should investigate,
ensuring the queue is accessible and then reissue the command (using
ACTION(REPLACE) ) before cycling the queue manager.
If the SYSTEM.CHLAUTH.DATA.QUEUE remains inaccessible at queue manager
startup, the cache of saved rules cannot be loaded and all channels
will be blocked until the queue and rules become accessible.
In summary, each time you add, change, or delete a CHLAUTH rule, the queue manager does two things:
It updates the in-memory cache (running configuration)
It hardens the configuration by adding, updating, or removing messages in the SYSTEM.CHLAUTH.DATA.QUEUE. This is so that the running configuration will be available when the queue manager is restarted.
When the queue manager is restarted, it reads the messages from the SYSTEM.CHLAUTH.DATA.QUEUE to initially populate the in-memory cache (running configuration) with previously existing rules.
If you were to delete the messages from this queue and restart your queue manager you would find there would be no CHLAUTH records set.
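You can see both halves of this from runmqsc, for example (the commands below are only illustrative, and output will of course vary by queue manager):

DISPLAY QLOCAL(SYSTEM.CHLAUTH.DATA.QUEUE) CURDEPTH
DISPLAY CHLAUTH('*') ALL

The first command shows how many hardened messages are currently on the queue; the second shows the rules currently in effect (the running configuration).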
A similar queue, called SYSTEM.AUTH.DATA.QUEUE, holds the queue manager's OAM (authorization) rules. One difference between the CHLAUTH queue and this queue is that the AUTH queue is opened by an internal MQ process with MQOO_INPUT_EXCLUSIVE, which means you cannot open the queue at all.
Note that CHLAUTH was added at MQ v7.1. If a queue manager is created new under 7.1 or higher, CHLAUTH will be ENABLED by default. If a queue manager is upgraded to MQ v7.1 or higher from a version prior to 7.1, CHLAUTH will be DISABLED by default. No matter if it is a new or upgraded queue manager, or whether CHLAUTH is ENABLED or DISABLED, there are three default rules that will be in place (listed below, with an MQSC sketch after the list).
BLOCKUSER rule to deny any *MQADMIN (privileged) user on all SVRCONN channels.
ADDRESSMAP rule to deny usage of any channel whose name starts with SYSTEM.* from any IP address.
ADDRESSMAP rule to allow connections to SYSTEM.ADMIN.SVRCONN from any IP address. Rule #1's restriction still applies.
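If memory serves, the three default rules look approximately like the following MQSC; treat this as a sketch and verify it against the DISPLAY CHLAUTH output on your own queue manager:

SET CHLAUTH('*') TYPE(BLOCKUSER) USERLIST('*MQADMIN')
SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
SET CHLAUTH('SYSTEM.ADMIN.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)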
The three default rules are likely what correspond to the three messages you observed in the queue. In general, leaving CHLAUTH ENABLED with the default rules is a good thing. I normally get rid of rule #3 because I do not have a channel with that name. You noted that CHLAUTH is disabled; if you have no intention of using this feature, you could use saveqmgr or dmpmqcfg to dump the MQSC commands needed to recreate the three default rules and then delete those rules, which will remove the three messages from the SYSTEM.CHLAUTH.DATA.QUEUE (see the sketch below).
If in the future you come to your senses and turn CHLAUTH back on, you can restore the rules you deleted from the backup you created.
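A sketch of the backup-then-remove approach; the dmpmqcfg flags are from memory, so check them against your MQ version, and QMGR1 is a placeholder queue manager name:

dmpmqcfg -m QMGR1 -x chlauth > chlauth.backup.mqsc

Then, in runmqsc, remove each rule, for example:

SET CHLAUTH('*') TYPE(BLOCKUSER) USERLIST('*MQADMIN') ACTION(REMOVE)

To put the rules back later, feed the backup file into runmqsc: runmqsc QMGR1 < chlauth.backup.mqsc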
We need two outbound queues on two different servers, with the queue managers acting as primary and secondary respectively. If sending to the primary fails we want to connect to the secondary, but as soon as the primary comes back up the application must send to the primary again. Is there any Spring Boot configuration that can help? We are using WebSphere MQ.
For a Java JMS messaging application in Spring Boot, a connection name list allows multiple host(port) endpoints to be set as comma-separated pairs.
The connection factory will then try these endpoints in turn to find an available queue manager. Unsetting WMQConstants.WMQ_QUEUE_MANAGER means the connection factory will connect to the available queue manager on a given host(port) endpoint regardless of its name.
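A minimal sketch of this in Java, assuming client transport and a SVRCONN channel called APP.SVRCONN; the channel name, host names, and ports are placeholders, not values from the question:

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.JMSException;

public static JmsConnectionFactory buildFactory() throws JMSException {
    JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
    JmsConnectionFactory cf = ff.createConnectionFactory();
    cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
    cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");
    // hostA(1414) is tried first, then hostB(1414)
    cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, "hostA(1414),hostB(1414)");
    // leaving the queue manager name empty means whichever queue manager answers is accepted
    cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "");
    return cf;
}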
You'll need to think about the lifecycle of the connection factory as the connection will remain valid once it has been established. In the scenario where the connection factory is bound to the queue manager on hostB(1414) (the second in the list) and then the queue manager on hostA(1414) (the first in the list) becomes available again, nothing would change until the next connection attempt.
It's important to note that where the queue manager endpoints in the connection name list are unrelated, the queues and messages available will not be the same. Multi-instance queue managers allow a queue manager instance to fail over between two host(port) endpoints. For container deployments, IBM MQ Native HA ensures your messaging applications are routed to an active instance of the queue manager.
A CCDT allows for much more sophisticated connection matching options with IBM MQ than the connection name list method outlined above. More information on CCDT is available in the IBM documentation. For JMS applications you'll need to point the connection factory at the CCDT, e.g. cf.setStringProperty(WMQConstants.WMQ_CCDTURL, ...) set to the location of the CCDT JSON file.
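As a sketch, pointing the same connection factory at a CCDT might look like this; the file path is a placeholder, and with a CCDT the channel definition comes from the table rather than from WMQ_CHANNEL:

cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "file:///var/mqm/ccdt/ccdt.json");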
It's probably worth me putting up an answer rather than more comments, though most of the ground has been covered by Rich's answer, augmented by Rich's, Morag's and my comments.
Client Reconnection seems the most natural fit for the first part of the use-case, and I'd suggest using a connection name list rather than going to the complexity of using a CCDT for this job. This will ensure that the application connects to your primary queue manager if it's available and the secondary queue manager if the primary isn't available, and will also move applications to the secondary if the primary fails.
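A sketch of the client reconnection settings on a connection factory like the one above; the endpoint names and the timeout are placeholders:

// try the primary first, then the secondary, and reconnect automatically if the connection breaks
cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, "primaryhost(1414),secondaryhost(1414)");
cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS, WMQConstants.WMQ_CLIENT_RECONNECT);
cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800); // seconds before giving up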
Once a connection has been made, in general it won't move. The two exceptions are that in a client reconnection configuration, a connection will be migrated to a different queue manager if communication to the first is broken, and in the context of uniform clusters, connections may be asked to migrate to another queue manager to balance load over the cluster. There is no automatic mechanism that I can think of which would force all the connections back from your secondary queue manager to the primary - you'd need to either do something with the application, or in a client reconnect setup you could bounce the secondary queue manager.
(You could use this forum to request such a feature, but I can't promise either that the request would be accepted or that it would be rapidly acted on.)
If you want to discuss this further, I'd suggest that more dialogue is probably needed to help us understand your scenario properly, so that we can make the most helpful recommendations. The MQ discussion forum may be a useful place for such dialogue, as this topic doesn't fit well (IMHO) with the StackOverflow model.
We are using IBM MQ and recently we faced an issue where some messages that were declared as sent to the MQ server by our client application were not consumed by our MQ consumer.
We lacked logging of produced/consumed messages, so we tried to check the messages in the MQ server's logs/data.
We found that messages are stored in /var/mqm/qmgrs/MQ_MANAGER/queues/, but we did not find all of the queue's messages in the queue file there (old messages were not found).
What is the rollover policy of IBM MQ, and where do old queue files go?
That's not how the queue files work. They are not rollover logs. The same space is continually overwritten as needed to store messages, but messages may not be written there at all if they can be processed through memory caches etc.
PERSISTENT messages are usually logged in files under /var/mqm/log, but there are circumstances where even that can be avoided. Your qmgr's recovery logfile configuration (circular/linear etc) will determine whether historic information about PERSISTENT messages remains available.
NONPERSISTENT messages are never logged in those files.
In IBM MQ messages can be either persistent or non-persistent.
If a message is persistent it will normally be written to the transactional logs (usually under /var/mqm/log/MQ_MANAGER/active) before a commit completes or before the PUT completes if not done under a unit of work.
If a message is non-persistent it will not be written to the transactional logs.
At this point either type of message may reside only in memory and will only be written to the queue file (usually under /var/mqm/qmgrs/MQ_MANAGER/queues) if the queue manager needs to offload memory or if the message is persistent and a checkpoint is taken.
If the message is consumed in a timely manner it may never be written to the queue file.
The queue file will shrink in size as space taken up by messages that are no longer needed is reclaimed; this happens automatically and is not configurable or documented by IBM as far as I know.
Non-persistent messages generally do not survive a queue manager restart.
Transactional logs can be configured as circular or linear. If circular the logs will be reused once they are no longer needed. If linear with automatic log management (introduced in 9.0.2) they will work similarly to circular. If linear without automatic log management, what happens to logs that are no longer needed would be based on your own log management.
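For reference, the log type and (from 9.0.2) the log management method are controlled by the Log stanza in the queue manager's qm.ini; the values below are only an example of what such a stanza can look like, not a recommendation:

Log:
   LogType=LINEAR
   LogManagement=Automatic
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=4096
   LogPath=/var/mqm/log/MQ_MANAGER/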
If the message is still in the transactional log you may be able to view it as described in "Where's my message? Tool and instructions to use the MQ recovery log to find out what happened to your persistent MQ messages on distributed platforms".
We are using IBM MQ 8.0. Activity logs are getting written for outgoing messages which we are sending to an external system, but there is no log available for the messages which come from the external system to our queue manager.
Is it a problem with the client channel configuration?
Or an MQ logging configuration issue?
IBM describes these "activity logs" as recovery logs in the Knowledge Center page "Making sure that messages are not lost (logging)":
IBM MQ records all significant changes to the persistent data controlled by the queue manager in a recovery log.
This includes creating and deleting objects, persistent message updates, transaction states, changes to object attributes, and channel activities. The log contains the information you need to recover all updates to message queues by:
Keeping records of queue manager changes
Keeping records of queue updates for use by the restart process
Enabling you to restore data after a hardware or software failure
Please note that non-persistent messages are not logged to the recovery log.
Based on your question it is likely that the messages you are sending to the external system are persistent messages and the messages you are receiving from the external system are non-persistent messages; this would explain why they are not logged to the recovery log files.
Persistence is determined at the time the message is first PUT.
IBM has a good Technote "Message persistence FAQs" about this subject.
Q3. What is the best way to be certain that messages are persistent?
A3. Set MQMD message persistence to persistent (MQPER_PERSISTENT), or nonpersistent (MQPER_NOT_PERSISTENT) and your message will always retain that value.
Note: MQPER_PERSISTENCE_AS_Q_DEF is the default setting for the persistence value in the MQMD. See the persistence values listed below.
...
Additional information
MQPER_PERSISTENCE_AS_Q_DEF can lead to unexpected results. If there is more than one definition in the queue-name resolution path, the default persistence attribute is taken from first queue definition in the path at the time of the MQPUT or MQPUT1 call. This queue could be an:
alias queue
local queue
local definition of a remote queue
queue-manager alias
transmission queue
cluster queue
The external system will need to make sure the messages they send you are set as persistent messages if you want them to be logged.
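If you control the putting application, a minimal sketch of forcing persistence with the MQ classes for Java looks like the following (the JMS equivalent is setting the delivery mode on the message producer); the payload is a placeholder:

import com.ibm.mq.MQMessage;
import com.ibm.mq.constants.CMQC;
import java.io.IOException;

static MQMessage buildPersistentMessage(String payload) throws IOException {
    MQMessage msg = new MQMessage();
    // set persistence explicitly instead of relying on MQPER_PERSISTENCE_AS_Q_DEF
    msg.persistence = CMQC.MQPER_PERSISTENT;
    msg.writeString(payload);
    return msg;
}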
I have to check the IBM MQ queue manager status before opening a queue.
I have to create a requestor app that checks whether the QMgr is active or not, and then puts or gets messages from MQ.
Is it possible to check the status?
Please share some code snippets.
Thanks
You should NEVER have to check the QMgr before opening a queue. As I responded to a similar question today, the design proposed is a very, VERY bad design. The effect is to turn async messaging back into synchronous messaging. This couples message producers to consumers, introduces location and resolution dependencies, breaks clustering, defeats WMQ's load distribution and balancing, embeds network topology into the application, and makes the whole system brittle. Please do not blame WMQ for not working correctly after intentionally defeating all its best features except the actual queue/dequeue operations.
If your requestor app is checking that the QMgr is active, you are much better off using a multi-instance connection name and a layer of two or more functionally equivalent QMgrs that can access the cluster. So long as one of the QMgrs is up, the app will cycle between them until it finds one at which to connect.
If your responder app is checking that the QMgr is active, you are much better off just attempting to connect. Responder apps should never fail over to a different QMgr since doing so breaks transactionality and may leave queues unserviced. Instead just ensure that each queue has at least two input handles from local responder apps that do not fail over across QMgrs. (It is OK if the QMgr itself fails over using hardware clustering or multi-instance QMgr though).
If the intent is to check that there's an open input handle on the queue before putting messages there, a better design is to have the requesting app not care to which queue instance the messages are routed, and instead use the instrumentation built into WMQ to either restart responder apps that lose their input handle, or to disable the queue when nothing's listening.
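For example, something along these lines in runmqsc (the queue name is a placeholder) shows whether anything currently has the queue open for input, and lets you stop new puts while nothing is listening:

DISPLAY QSTATUS('APP.REQUEST.QUEUE') IPPROCS
ALTER QLOCAL('APP.REQUEST.QUEUE') PUT(DISABLED)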
I have two MQ queue managers configured with the same queue names. Both are configured to send data to different servers. Currently queue manager QM1 is stopped (status: Ended Immediately) and QM2 is running.
Now my program opens the queue and sends data. It does not specify a queue manager name. When I execute the program, the MQ connection request returns error 2059.
My questions are:
What happens when multiple queue managers have same queue name?
How to tackle situation without changing the code?
Please forgive if the description is vague. It would be helpful if anyone provide links so that newbie like me can learn something.
Thanks
It would be helpful if you could provide details on your application: whether it uses server bindings or a client mode connection to the queue manager, and what version of MQ you are using.
The below information is valid for MQ v7.x:
If you are using client mode then you can specify multiple CONNAMEs to connect with. If one queue manager is down, your application will connect to the next queue manager in the CONNAME list. One of the simplest ways to do this when using a client mode connection is to define the MQSERVER environment variable and specify multiple connection names.
SET MQSERVER=<channel name>/TCP/host1(port1),host2(port2)
For example when both queue managers are on local host:
SET MQSERVER=MYSVRCONCHN/TCP/localhost(1414),localhost(1415)
In server bindings mode, if a queue manager name is not specified, the application will attempt to connect to the default queue manager. If the default queue manager is down, then 2059 is thrown.
Your explanation doesn't provide clarity about your requirements.
You wrote:
My questions are 1. What happens when multiple queue managers have same queue name.
Nothing. It's a normal scenario. Different queue managers may have queues with the same name, and it doesn't create any ambiguity. The scenario is a little different, though, when the queue managers are in the same cluster and the queue is also a cluster queue; then everything depends on requirements and design.
You wrote:
2. How to tackle situation without changing the code
Run the queue manager which is stopped.
You wrote:
Now my program opens the queue and sends data. It doesnot specify
queue manager name.
What application are you using? For a client application, you access a queue through a queue manager object.
I am assuming that you are using a (client) application which doesn't take queue manager details from you and only takes queue details, and maybe the queue manager is hard-coded within the code. And it sends the message first to the queue on queue manager 1 and then to queue manager 2. But, in your case, queue manager 1 is down.
If the above is the case, then the application's code needs to be changed. You should have exception handling in such a way that it executes the code for sending the message to the second queue manager even if the connection to the first one throws an error (a sketch follows).
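A minimal sketch of that kind of exception handling with the MQ classes for Java; the queue manager names, hosts, port, and channel are placeholders, not values from your environment:

import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.util.Hashtable;

static MQQueueManager connectWithFallback() throws MQException {
    Hashtable<String, Object> props = new Hashtable<>();
    props.put(CMQC.CHANNEL_PROPERTY, "APP.SVRCONN");
    props.put(CMQC.PORT_PROPERTY, 1414);
    try {
        props.put(CMQC.HOST_NAME_PROPERTY, "host1");
        return new MQQueueManager("QM1", props);   // primary
    } catch (MQException primaryDown) {
        props.put(CMQC.HOST_NAME_PROPERTY, "host2");
        return new MQQueueManager("QM2", props);   // fall back to secondary
    }
}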