MQ Explorer stops displaying most information

After using MQ Explorer 7.5 on Ubuntu 12 to add JMS connection factories and a JMS destination, it stopped displaying my two queues and their subsidiary information, as well as the new JMS information. I tried a few things to get it working again: stopping and restarting the queue manager, rebooting, even reinstalling MQ Explorer, without any luck.
I can run a status on the "empty" queues folder and it then shows me my two queues; each has "queue monitoring" set to off. Is this relevant? Can I set it on?
Am I stuck with MQ Explorer to display and manage the JMS objects (there doesn't seem to be any documentation on how to use the command line for JMS objects)?
More detail:
I created the objects using the following commands:
DEFINE QLOCAL (QUEUE_FROM)
DEFINE QLOCAL (QUEUE_TO)
SET AUTHREC PROFILE(QUEUE_FROM) OBJTYPE(QUEUE) PRINCIPAL('bsmith') AUTHADD(PUT,GET)
SET AUTHREC PROFILE(QUEUE_TO) OBJTYPE(QUEUE) PRINCIPAL('bsmith') AUTHADD(PUT,GET)
SET AUTHREC OBJTYPE(QMGR) PRINCIPAL('bsmith') AUTHADD(CONNECT)
DEFINE CHANNEL (CHANNEL1) CHLTYPE (SVRCONN) TRPTYPE (TCP)
SET CHLAUTH(CHANNEL1) TYPE(ADDRESSMAP) ADDRESS('127.0.0.1') MCAUSER('bsmith')
DEFINE LISTENER (LISTENER1) TRPTYPE (TCP) CONTROL (QMGR) PORT (1415)
START LISTENER (LISTENER1)
All of these were then visible in MQ Explorer, using a user that was part of the mqm group.
I then added, using MQ Explorer, a file-based JMS context, two JMS connection factories, and a JMS destination. After adding the JMS destination, MQ Explorer stopped displaying everything except the queue manager and the JMS context in the UI.
If I try to start the listener again using the command START LISTENER (LISTENER1), it tells me that it is already started. When I add a new queue to the queue manager using a command, it is also not visible in the UI. A refresh doesn't change this.
/etc/environment is set to:
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
export MQSERVER="SWI_CHANNEL/TCP/COM22189(1415)"
export MQ_JAVA_LIB_PATH=/opt/mqm/java/lib64
export MQ_JAVA_INSTALL_PATH=/opt/mqm/java
export MQ_JAVA_DATA_PATH=/var/mqm
export LD_LIBRARY_PATH=/opt/mqm/java/lib64
CLASSPATH=.:/opt/mqm/java/lib/com.ibm.mq.jar:/opt/mqm/java/lib/com.ibm.mqjms.jar:/opt/mqm/samp/wmqjava/samples:/opt/mqm/samp/jms/samples:${JAVA_HOME}:${MQ_JAVA_LIB_PATH}:${CLASSPATH}
PATH=".:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:${JAVA_HOME}/bin:${JAVA_HOME}:/usr/lib/jvm/jdk1.6.0_45/jre/bin:${MQ_JAVA_LIB_PATH}"
Trying the suggested JMSAdmin tool gives:
/opt/mqm/java/bin$ ./JMSAdmin -v
Licensed Materials - Property of IBM
5724-H72, 5655-R36, 5724-L26, 5655-L82
(c) Copyright IBM Corp. 2008, 2011 All Rights Reserved.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Starting WebSphere MQ classes for Java(tm) Message Service Administration
Initializing JNDI Context...
INITIAL_CONTEXT_FACTORY: com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL: file:/C:/JNDI-Directory
JNDI initialization failed, please check your JNDI settings and service.
The name '"/C:/JNDI-Directory"' cannot be resolved
Error: javax.naming.NameNotFoundException; remaining name '"/C:/JNDI-Directory"'

The error javax.naming.NameNotFoundException; remaining name '"/C:/JNDI-Directory"' can be resolved by creating a folder named JNDI-Directory on the C: drive (on Windows), or more generally by pointing the JNDI provider at a directory that actually exists. This is the place where the .bindings file will be generated.
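On Linux, where there is no C: drive, the equivalent fix is to edit JMSAdmin.config (in /opt/mqm/java/bin) so that PROVIDER_URL points at an existing directory. A minimal sketch, assuming an arbitrary /home/bsmith/JNDI-Directory path of your choosing:

INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:///home/bsmith/JNDI-Directory

Once JMSAdmin starts cleanly, JMS objects can be administered from the command line after all. For example (the object names after the InitCtx> prompt are illustrative, and QM1 stands in for your real queue manager name):

InitCtx> DEFINE QCF(myQCF) QMANAGER(QM1) TRANSPORT(CLIENT) CHANNEL(CHANNEL1) HOSTNAME(localhost) PORT(1415)
InitCtx> DEFINE Q(myQueue) QUEUE(QUEUE_FROM)
InitCtx> DISPLAY CTX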

Related

MQRC_UNKNOWN_ALIAS_BASE_Q when connecting with IBM MQ cluster using CCDT and Spring Boot JMSTemplate

I have a Spring Boot app using JMSListener + IBMConnectionFactory + CCDT to connect to an IBM MQ cluster.
I set the following connection properties:
- url pointing to a generated CCDT file
- username (password not required, since it is a test environment)
- queue manager name NOT defined, since it's the cluster's task to decide; a few Google results, including several Stack Overflow ones, indicate that in my case the qmgr must be set to an empty string.
When my Spring Boot JMSListener tries to connect to the queue, the following MQRC_UNKNOWN_ALIAS_BASE_Q error occurs:
2019-01-29 11:05:00.329 WARN [thread:DefaultMessageListenerContainer-44][class:org.springframework.jms.listener.DefaultMessageListenerContainer:892] - Setup of JMS message listener invoker failed for destination 'MY.Q.ALIAS' - trying to recover. Cause: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2082' ('MQRC_UNKNOWN_ALIAS_BASE_Q').
com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:513)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
In the MQ error log I see the following:
01/29/2019 03:08:05 PM - Process(27185.478) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9999: Channel 'MyCHL' to host 'MyIP' ended abnormally.
EXPLANATION:
The channel program running under process ID 27185 for channel 'MyCHL'
ended abnormally. The host name is 'MyIP'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
01/29/2019 03:15:14 PM - Process(27185.498) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9209: Connection to host 'MyIP' for channel 'MyCHL' closed.
EXPLANATION:
An error occurred receiving data from 'MyIP' over TCP/IP. The connection
to the remote host has unexpectedly terminated.
The channel name is 'MyCHL'; in some cases it cannot be determined and so
is shown as '????'.
ACTION:
Tell the systems administrator.
Since the MQ error log contains QMgr(MyQMGR), a value I did not set in the connection properties, I assume the routing is fine: the MQ cluster figured out a qmgr to use.
The alias exists and points to an existing queue. Both the target queue and the alias are added to the cluster via the CLUSTER(clustname) attribute.
What could be wrong?
Short Answer
MQ Clustering is not used for a consumer application to find a queue to GET messages from.
MQ Clustering is used when a producer application PUTs messages to direct them to a destination.
Further reading
Clustering is used when messages are being sent to help provide load balancing to multiple instances of a clustered queue. In some cases people use this for hot/cold failover by having two instances of a queue and keeping only one PUT(ENABLED).
If an application is a producer that is putting messages to a clustered queue, it only needs to be connected to a queue manager in the cluster and have permissions to put to that clustered queue. MQ based on a number of different things will handle where to send that message.
Prior to v7.1 there were only two ways to provide access to remote clustered queues:
1. Using a QALIAS:
   - Define a local QALIAS which has a TARGET set to the clustered queue name. Note this QALIAS does not itself need to be clustered.
   - Grant permission to put to the local QALIAS.
2. Provide permission to PUT to the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
The first option allows for granting granular access to an application for specific clustered queues in the cluster. The second option allows for the application to put to any clustered queue in the cluster or any queue on any clustered queue manager in the cluster.
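A minimal MQSC sketch of the first option (the queue and principal names here are illustrative, not taken from the question):

DEFINE QALIAS(MY.LOCAL.ALIAS) TARGET(MY.CLUSTERED.QUEUE)
SET AUTHREC PROFILE(MY.LOCAL.ALIAS) OBJTYPE(QUEUE) PRINCIPAL('appuser') AUTHADD(PUT)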
At v7.1 IBM added a new optional behavior, provided via the setting ClusterQueueAccessControl=RQMName in the Security stanza of qm.ini. If this is enabled (it is not the default), you can provide permission for the app to PUT to remote clustered queues directly, without the need for a local QALIAS.
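For reference, the stanza in qm.ini would look like the sketch below; note that qm.ini changes only take effect when the queue manager is restarted:

Security:
   ClusterQueueAccessControl=RQMName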
What clustering is not for is consuming applications such as your example of a JMSListener.
An application that will consume from any QLOCAL (clustered or not) must be connected to the queue manager where the QLOCAL is defined.
If you have a situation where there are multiple instances of a clustered QLOCAL that are PUT(ENABLED), you would need to ensure you have consumers connected directly to each queue manager that an instance is hosted on.
Based on your comment you have a CCDT with an entry such as:
CHANNEL('MyCHL') CHLTYPE(CLNTCONN) QMNAME('MyQMGR') CONNAME('node1url(port1),node2url(port2)')
If there are two different queue managers with different queue manager names listening on node1url(port1) and node2url(port2), then you have different ways to accomplish this from the app side.
When you specify the QMNAME to connect to, the app will expect that name to match the queue manager it connects to, unless one of the following applies:
- If you specify *MyQMGR, it will find the channel or channels with QMNAME('MyQMGR'), pick one, connect, and not enforce that the remote queue manager name must match.
- If in your CCDT you have QMNAME(''), i.e. it is set to NULL, then in your app you can specify an empty queue manager name or a single space, and it will find this entry in the CCDT and not enforce that the remote queue manager name must match.
- If in your app you specify the queue manager name as *, MQ will use any channel in the CCDT and not enforce that the remote queue manager name must match.
One limitation of a CCDT is that the channel name must be unique within the CCDT. Even if the QMNAME is different, you can't have a second entry with the same channel name.
When you connect, you are hitting the entry with two CONNAMEs and getting connected to the first ip(port). You would only get to the second ip(port) if the first is unavailable at connect time (MQ then tries the second), or if you are connected with RECONNECT enabled and the first goes down, in which case MQ will try the first, then the second.
If you want both clustered queue instances PUT(ENABLED) to receive traffic, then you need to be able to connect specifically to each of the two queue managers to read those queues.
I would suggest you add a new channel on each queue manager, each with a queue-manager-specific name that is also different from the existing name, something like this:
CHANNEL('MyCHL1') CHLTYPE(CLNTCONN) QMNAME('MyQMGR1') CONNAME('node1url(port1)')
CHANNEL('MyCHL2') CHLTYPE(CLNTCONN) QMNAME('MyQMGR2') CONNAME('node2url(port2)')
This would be in addition to the existing entry.
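Note that a CLNTCONN channel can only be used if a SVRCONN channel of the same name exists on the queue manager it points at, so a sketch of the matching server-side definitions (reusing the assumed names above) would be:

On MyQMGR1: DEFINE CHANNEL('MyCHL1') CHLTYPE(SVRCONN) TRPTYPE(TCP)
On MyQMGR2: DEFINE CHANNEL('MyCHL2') CHLTYPE(SVRCONN) TRPTYPE(TCP)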
For your putting components you can continue to use the channel that can connect to either queue manager.
For your getting components you can configure at least two of them, one connecting to each queue manager using the new queue-manager-specific CCDT entries; this way both queues are being consumed.

Disk full made MQ dead

We have an application that uses WebSphere MQ 7.0.1.3. During extensive testing in our stage environment, the disks became full.
After this, MQ hangs. We removed the application logs (not related to MQ) and added more disk, but that didn't solve the problem.
We tried to restart the queue manager:
$ endmqlsr
$ endmqm XYZ
$ strmqm XYZ
WebSphere MQ queue manager 'XYZ' starting.
WebSphere MQ was unable to display an error message 893.
The logs from the time when the disk became full and the error occurred:
----- amqxfdcx.c : 828 --------------------------------------------------------
06/08/2018 03:36:44 AM - Process(8832.5) User(mqm) Program(amqzlaa0)
AMQ6119: An internal WebSphere MQ error has occurred (Rc=28 from write)
----- amqxfdcx.c : 783 --------------------------------------------------------
06/08/2018 03:36:44 AM - Process(8832.5) User(mqm) Program(amqzlaa0)
AMQ6184: An internal WebSphere MQ error has occurred on queue manager XYZ.
----- amqxfdcx.c : 822 --------------------------------------------------------
06/08/2018 03:36:46 AM - Process(8832.5) User(mqm) Program(amqzlaa0)
AMQ6119: An internal WebSphere MQ error has occurred (Rc=28 from write)
----- amqxfdcx.c : 783 --------------------------------------------------------
06/08/2018 03:36:46 AM - Process(8832.5) User(mqm) Program(amqzlaa0)
AMQ6184: An internal WebSphere MQ error has occurred on queue manager XYZ.
AMQ6119: An internal WebSphere MQ error has occurred ('28 - No space left on device' from semget.)
----- amqxfdcx.c : 783 --------------------------------------------------------
06/14/2018 02:35:46 PM - Process(6794.1) User(mqm) Program(amqzxma0)
AMQ6184: An internal WebSphere MQ error has occurred on queue manager XYZ.
----- amqxfdcx.c : 822 --------------------------------------------------------
06/14/2018 02:35:46 PM - Process(6794.1) User(mqm) Program(amqzxma0)
AMQ6118: An internal WebSphere MQ error has occurred (20006037)
When trying to connect with the IBM WebSphere MQ Explorer
Queue manager not available for connection - reason 2059. (AMQ4043)
Severity: 20 (Error)
Explanation: The attempt to connect to the queue manager failed. This could be because the queue manager is incorrectly configured to allow a connection from this system, or the connection has been broken.
Response: Ensure that the queue manager is running. If the queue manager is running on another computer, ensure it is configured to accept remote connections.
Is there a way of clearing all messages from the queues and resetting all flags so the queue manager will start and the queues will work again?
There is only old test data in the queues, nothing of value.
Or do you have any other suggestions on how to fix this?
You can use the mqrc command to get more information on errors. Most of the time MQ reports return codes as four-digit decimal numbers. In this case, since the return code is three digits, it usually means it is a hex return code (0x893 = 2195 decimal):
$ mqrc 2195
2195 0x00000893 MQRC_UNEXPECTED_ERROR
This error is thrown when MQ hits an error condition that was not expected. Usually you will find that an FDC file was created in the /var/mqm/errors directory which could provide some more detail.
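For example, a quick shell check for recent FDC files:

$ ls -ltr /var/mqm/errors/*.FDC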
The best course of action when you receive this type of error is to open a PMR with IBM and have them provide direction on recovery, to ensure you have the best chance of preserving any messages present on your queues. However, you are using a version of MQ (7.0) that has been out of support since September 30th, 2015. The specific Fix Pack you are on (7.0.1.3) was released in August 2010; the last release of v7.0 from IBM was 7.0.1.14 in August 2016.
If you pay IBM for extended support, you may be able to open a PMR with them for further support.
The best path forward, once you have resolved your issue, would be to migrate to a supported version of IBM MQ; at the time of writing, v8.0 and v9.0 are the only supported versions.
Assuming you do not have extended support and are unable to get assistance from IBM, the following are some suggested steps:
Updating even to the latest Fix Pack (7.0.1.14) may help, and even if it does not solve the problem, it is still better to be at the latest Fix Pack of an unsupported version of IBM MQ.
You could try to cold start your queue manager and see if that helps. This is documented starting on Page 4 of the presentation "WebSphere MQ Disaster Recovery" given by Mark Taylor at Capitalware's MQ Technical Conference v2.0.1.3.
- Create a queue manager EXACTLY like the one that failed:
  - Use qm.ini to work out the parameters for the crtmqm command:
    Log:
      LogPrimaryFiles=10
      LogSecondaryFiles=10
      LogFilePages=65535
      LogType=CIRCULAR
  - Issue the crtmqm command:
    crtmqm -lc -lf 65535 -lp 10 -ls 10 -ld /tmp/mqlogs TEMP.QMGR
  - Make sure there is enough space for the new log files in that directory.
  - The name of the dummy queue manager is irrelevant; we only care about getting the log files.
  - Don't start this dummy queue manager, just create it.
- Replace the old logs and amqhlctl.lfh with the new ones:
    cd /var/mqm/log
    mv QM1 QM1.SAVE
    mv /tmp/mqlogs/TEMP!QMGR QM1
  - Note the "mangled" directory name ... this is normal.
- Data in the queues is preserved if messages are persistent.
- Object definitions are also preserved:
  - Objects contain their own definitions in their files.
  - The mapping between files and object names is held in QMQMOBJCAT.
Once all the above is complete, try to start your queue manager.

WebSphere MQ listener available but showing not found error

We are facing an error: the application is unable to connect to the queue manager, with reason code MQRC 2538.
WebSphere MQ version: v7.0.1.2.
Operating system: Solaris.
I have started the listener manually with:
runmqlsr -m qmname -t tcp -p port
After that I checked the status of the listener with the command:
display lsstatus(listener name)
The listener is available, but when I try to display its status it shows "MQ object not found".
We checked the error logs but there is no information related to the client failures; since we started the listener manually, only listener information appears in the error logs.
We also checked /var/mqm/errors and found FDC files with probe ID XY132002. We contacted the sysadmin and they mounted more disk space.
After mounting more /var/mqm/ disk space we are still facing the same issue.
I have already issued start lstr(lstr name) in script mode; it accepts the request, but when I try to display the status of this listener it shows "MQ object not found".
I have checked the qmgr error logs and the FDC error logs.
Below are the errors written in /var/mqm/errors/AMQERR01.LOG:
Explanation: 1. An attempt has been made to run the broker (SFMSICREQMGR) but the broker has ended for reason '6119:xecF_E_UNEXPECTED_SYSTEM_RC'.
error: AMQ6119: An internal WebSphere MQ error has occurred (failed to get memory segment: shmget(0x00000000, 16384) [rc=1 errno=28] no space left on device.
Below is the error written at the queue manager level:
AMQ5008: An essential WebSphere MQ process 10063 (amqfgpub) cannot be found and is assumed to be terminated.
These are the errors written in the queue manager level and system level error logs.
We have added the below values:
process.max-file-descriptor=(basic,10000,deny)
project.max-sem-ids=(priv,1024,deny)
project.max-shm-ids=(priv,1024,deny)
project.max-shm-memory=(priv,4294967296,deny)
After adding these parameters we restarted the queue managers.
We have four queue managers on the server; three queue managers and their listeners are in a running state, while the fourth queue manager faces the same error.
We stopped one queue manager and started the fourth; the fourth queue manager then ran and its listener was also in a running state.
The stopped queue manager now will not start; we are facing the same error for it.
Update: all queue managers and listeners are now running fine.
We have created a local queue named ERROR_LOCAL_QUEUE, but when the application tries to get a message from this queue it gets error MQRC 2033.
Kindly help with this issue.
Thank you so much to all; the issue got resolved.
If you start a listener using the following command (as per your question):-
runmqlsr -m qmname -t tcp -p port
Then you have not specified a name for the listener anywhere (because this command does not have that capability).
It will however still show up in a DISPLAY LSSTATUS command with a system generated name. If you use the following command:-
DISPLAY LSSTATUS(*)
that will show all running listeners, and you will see that there is one with a name something like SYSTEM.LISTENER.TCP.1, which is your runmqlsr one.
Alternatively, if you want to give your listener a specific name, then you must define a listener as follows (replacing nnnn with your port number):-
DEFINE LISTENER(TCP.LSTR) TRPTYPE(TCP) CONTROL(QMGR) PORT(nnnn)
Then you are able to start it as follows:-
START LISTENER(TCP.LSTR)
and show its status as follows:-
DISPLAY LSSTATUS(TCP.LSTR) ALL
N.B. I used the name TCP.LSTR but you may choose any name you wish.
The errors you mention at the end of your question are unrelated to listeners. Please open a separate question for those.
MQ v7.0 has been out of support since September 30th 2015.
The errors you found indicate the queue manager is short on shared memory; this could cause the entire queue manager to have issues, including your listener. The current values, along with IBM's recommendations, can be found using the mqconfig script.
MQ v7.0 did not come with the mqconfig script. Download the script and verify which kernel settings are not correct; the download site is "How to configure UNIX and Linux systems for IBM MQ".
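Usage is a single command run on the MQ server; the -v flag selects the MQ version to validate against (v7.0 here, matching the question):

$ ./mqconfig -v 7.0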
You can find more information on setting these in the IBM MQ v7 Knowledge Center page "Resource limit configuration".
The values in the Knowledge Center are recommended values for an average server with a couple of queue managers and should be treated as minimums. If you can't run 4 queue managers, I would suggest going to higher values. I would start with setting max-sem-ids and max-shm-ids to 10240 and see if that solves it; if not, then attempt to add 50% to the max-shm-memory value.
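A sketch of those suggested Solaris resource controls, mirroring the format quoted in the question (10240 for the id limits; 4294967296 plus 50% is 6442450944):

project.max-sem-ids=(priv,10240,deny)
project.max-shm-ids=(priv,10240,deny)
project.max-shm-memory=(priv,6442450944,deny)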

How to Connect to a Remote Queue Manager with IBM WebSphere MQ Explorer?

I created a queue manager named QMTEST in IBM WebSphere MQ Explorer. I want to connect to a remote queue manager (remote IP address). I followed these steps:
Add Remote Queue Manager
Queue manager name: QMTEST [Next]
Host name or IP address: X.X.X.X (remote IP) [Finish]
But I couldn't connect. I got this error message: 'Could not establish connection to the queue manager - reason 2538. (AMQ4059)'. What can I do?
The four digit number in your error message is an MQRC (MQ Reason Code). This number gives you more information about what went wrong. You can look it up in Knowledge Center.
MQRC_HOST_NOT_AVAILABLE (2538)
There is a list of possible things that could cause this error. My guess is that it is the first one, that you have not started a listener on the queue manager, since you do not mention doing that in your question details.
You should also read the following link which is some basic details on how to connect to a remote queue manager. You appear to have the MQ Explorer side sorted, but perhaps not the queue manager side.
Setting up the server using the command line
Please ensure the listener is running on the remote queue manager side. The default MQ listener port is 1414. If the listener is running, then check the queue manager error logs for any connection errors from MQ Explorer.
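A quick way to check from the remote server itself (replace QMTEST with the remote queue manager's actual name):

$ runmqsc QMTEST
DISPLAY LSSTATUS(*)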
Are you sure the qmgr and its listener are running and that you have a SYSTEM.ADMIN.SVRCONN channel? That is the server-connection channel used for remote administration of a queue manager. This technote may be helpful.
Is this on a modern Windows or Linux server? If so, did you open the port (i.e. 1414) in the firewall?
Sometimes when we create a queue manager, the remote administration objects are not created; we then get such errors because MQ Explorer is unable to find these objects. To create them, right-click on the queue manager, select "Remote Administration Objects", create them, and start the listener.
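If you prefer the command line, a rough MQSC equivalent of those remote administration objects would be the sketch below (the listener name is illustrative; port 1414 is assumed):

DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)
DEFINE LISTENER(LISTENER.TCP) TRPTYPE(TCP) CONTROL(QMGR) PORT(1414)
START LISTENER(LISTENER.TCP)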
I experienced this same error and the queue was configured correctly.
I was using Eclipse and switched to the MQ Explorer setup available from the IBM web page.
After that, I was able to see the queues and everything I was supposed to see.

IBM Cast Iron: MQ Put activity issues

I am trying to put a message onto a WebSphere MQ queue from an Orchestration which is deployed on Cast Iron Live. I have used a Secure Connector, since the orchestration is deployed in the cloud. When I try to execute the flow, it fails and the message is not placed on the MQ queue. Below are the errors:
Error while trying to call remote operation execute on Secure Connector for activity
com.approuter.module.mq.activity.MqPut and Secure Connector LocalSecureConnector,
error is Unable to put message on queue null. MQ returned error code 2538.
Unable to put message on queue null. MQ returned error code 2538.
Fault Name : Mq.Put.OperationActivityId : 163
Message: Unable to put message on queue null. MQ returned error code 2538.
Activity Name:Put MessageFault Time: 2015-07-15T05:40:29.711Z
Can someone please help me resolve this? Please let me know if any further details are required.
Here are the details:
- The Cast Iron flow is deployed on the Cast Iron Cloud, i.e. Cast Iron Live.
- MQ is running on-premise.
- The port I am trying to connect to is 1414.
- There is a Secure Connector running on the machine where MQ is installed.
- The MQ version is 8.
In the Cast Iron flow I am using an MQ connector, giving the hostname where MQ is running, port 1414, channel name SYSTEM.DEF.SVRCONN, and username mqm. I also tried using my logon username, after adding it to the mqm group, but this doesn't seem to work either.
The return code is instructive:
2538 0x000009ea MQRC_HOST_NOT_AVAILABLE
This indicates that Cast Iron is attempting to contact MQ using a client connection and not finding a listener at the host/port that it is using.
There are a couple of possibilities here but not enough info to say which it might be. I'll explain and provide some diagnostics you can try.
The 2538 indicates an attempt to contact the QMgr has failed. This might be that, for example, the QMgr isn't listening on the configured port (1414) or that the MQ listener is not running.
The error code says the queue name is "null". The question doesn't specify which queue name the connector is configured with, but presumably it's been configured with some queue name. This error code suggests the Secure Connector on the MQ server side doesn't have its configuration installed.
The Cast Iron docs advise connecting with an ID in the mqm group, but do not mention that on any MQ version 7.1 or higher this is guaranteed to fail unless special provisions are made to allow the admin connection. It may be that it's actually failing with an authorization error and the connector is not reporting the correct error.
If it is as simple as the listener not running, that's easy enough to fix. Just start it and make sure it's on 1414 as expected.
Next, ensure that the Secure Connector has the configuration that was created using the Cast Iron admin panel. You need to understand why the error code says the queue name is null.
Now enable Authorization Events and Channel Events in the QMgr and try to connect again. The connector on the MQ server should connect when started and if successful you can see this by looking at the MQ channel status. However, if unsuccessful, you can tell by looking at the event messages or the MQ error logs. Both of these will show authorization failures and connection attempts, if the connection has made it that far.
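A sketch of turning those events on in runmqsc (these are standard queue manager attributes):

ALTER QMGR AUTHOREV(ENABLED) CHLEV(ENABLED)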
The reason I'm expecting 2035 Authorization Error failures is that any QMgr from v7.1 and up will by default block an administrative connection on any channel. This is configured in the default set of CHLAUTH rules. The intent is that the MQ admin must explicitly provision admin access by adding one or more new CHLAUTH rules.
For reasons of security SYSTEM.DEF.* and SYSTEM.AUTO.* channels should never be used for legitimate connections. The Best Practice is to define a new SVRCONN, for example one named CAST.IRON.SVRCONN and then define a CHLAUTH rule to allow the administrative connection.
For example:
DEFINE CHL(CAST.IRON.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
SET CHLAUTH('CAST.IRON.SVRCONN') TYPE(ADDRESSMAP) +
ADDRESS('127.0.0.1') +
USERSRC(MAP) MCAUSER('mqm') +
ACTION(REPLACE)
SET CHLAUTH('CAST.IRON.SVRCONN') TYPE(BLOCKUSER) +
USERLIST('*NOBODY') +
WARN(NO) ACTION(REPLACE)
The first statement defines the new channel.
The next one allows connections from 127.0.0.1, which is where the Secure Connector lives. (Presumably you installed the internal Secure Connector on the same server as MQ, yes?) Ideally the connector would use TLS on the channel, and instead of IP filtering the CHLAUTH rule would filter on the certificate Distinguished Name. This rule is not nearly so selective: it allows anyone on the local host to be an MQ administrator by using this channel.
The last statement overrides the default CHLAUTH rule which blocks *MQADMIN with a new rule that blocks *NOBODY but just for that channel.
