Check CHLAUTH for an MQ channel - ibm-mq

I want to check whether any CHLAUTH rule is deployed for a channel in my MQ installation. So I run this command:
dis chlauth(MY.CHANNEL.NAME)
But I get this message:
AMQ8884: Channel authentication record not found.
Does this mean that I ran the wrong command, or that this channel has no channel authentication mechanism?

Since CHLAUTH rules can be put in place either with fully spelled-out channel names or with wildcarded profiles, there are a few different ways to display the CHLAUTH rules in the system. Note that DIS CHLAUTH(MY.CHANNEL.NAME) displays only a rule whose profile name is exactly MY.CHANNEL.NAME, so AMQ8884 means no such exact profile exists, not that no rule applies to the channel.
In this case the best way to determine whether there is a rule that will apply to your channel when it runs is to use the following command:
DISPLAY CHLAUTH(MY.CHANNEL.NAME) +
MATCH(RUNCHECK) +
ADDRESS(IP-address) +
CLNTUSER(client-side-UserID)
You can read more about this approach in "I'm being blocked by CHLAUTH - how can I work out why?"
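For example, assuming a client connecting from IP address 192.0.2.10 with the client-side user ID appuser (both values hypothetical):
DISPLAY CHLAUTH(MY.CHANNEL.NAME) +
MATCH(RUNCHECK) +
ADDRESS('192.0.2.10') +
CLNTUSER('appuser')
If this also reports that no record was found while CHLAUTH is ENABLED, then no rule, wildcarded or otherwise, applies to that inbound connection.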

Related

Can we delete the messages from SYSTEM.CHLAUTH.DATA.QUEUE

Could you please explain why there would be messages pending in the queue SYSTEM.CHLAUTH.DATA.QUEUE? There are 3 messages in the queue now.
What happens if these messages are deleted? Will there be any issues?
Are those informational messages about the channel authentication records?
Please suggest a resolution.
The IBM MQ v7.5 Knowledge Center page "Troubleshooting channel authentication records" addresses what the SYSTEM.CHLAUTH.DATA.QUEUE is used for.
Behaviour of SET CHLAUTH command over queue manager restart
If the SYSTEM.CHLAUTH.DATA.QUEUE has been deleted or altered in a way
that it is no longer accessible, i.e. PUT(DISABLED), the SET CHLAUTH
command will only be partially successful. In this instance, SET
CHLAUTH will update the in-memory cache, but will fail when hardening.
This means that although the rule put in place by the SET CHLAUTH
command may be operable initially, the effect of the command will not
persist over a queue manager restart. The user should investigate,
ensuring the queue is accessible and then reissue the command (using
ACTION(REPLACE) ) before cycling the queue manager.
If the SYSTEM.CHLAUTH.DATA.QUEUE remains inaccessible at queue manager
startup, the cache of saved rules cannot be loaded and all channels
will be blocked until the queue and rules become accessible.
In summary, each time you add, change, or delete a CHLAUTH rule, the queue manager does two things:
It updates the in-memory cache (running configuration)
It hardens the configuration by adding, updating, or removing messages in the SYSTEM.CHLAUTH.DATA.QUEUE. This is so that the running configuration will be available when the queue manager is restarted.
When the queue manager is restarted, it reads the messages from the SYSTEM.CHLAUTH.DATA.QUEUE to initially populate the in-memory cache (running configuration) with previously existing rules.
If you were to delete the messages from this queue and restart your queue manager you would find there would be no CHLAUTH records set.
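You can watch the hardening happen by checking the queue depth before and after adding a rule (the channel name and address below are hypothetical):
DIS QL(SYSTEM.CHLAUTH.DATA.QUEUE) CURDEPTH
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('10.0.0.*') USERSRC(CHANNEL)
DIS QL(SYSTEM.CHLAUTH.DATA.QUEUE) CURDEPTH
The depth should increase by one once the new rule has been hardened.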
A similar queue called SYSTEM.AUTH.DATA.QUEUE holds the queue manager's OAM (authorization) rules. One difference between the CHLAUTH queue and this queue is that the AUTH queue is opened by an internal MQ process with MQOO_INPUT_EXCLUSIVE, which means you cannot open it for input at all.
Note that CHLAUTH was added in MQ v7.1. If a queue manager is created new under v7.1 or higher, CHLAUTH will be ENABLED by default. If a queue manager is upgraded to v7.1 or higher from an earlier version, CHLAUTH will be DISABLED by default. Whether the queue manager is new or upgraded, and whether CHLAUTH is ENABLED or DISABLED, there are three default rules in place (listed below).
BLOCKUSER rule to deny any MQADMIN user on all SRVCONN channels.
ADDRESSMAP rule to deny usage of any channel that starts with SYSTEM.* from any IP address.
ADDRESSMAP rule to allow connections to SYSTEM.ADMIN.SVRCONN from any IP address. Rule #1's restriction still applies.
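In MQSC form those defaults look roughly like this (verify against your own queue manager with DIS CHLAUTH(*)):
SET CHLAUTH('*') TYPE(BLOCKUSER) USERLIST('*MQADMIN')
SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
SET CHLAUTH('SYSTEM.ADMIN.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)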
The three default rules are likely directly related to the three messages you observed in the queue. In general, leaving CHLAUTH ENABLED with the default rules is a good thing. I normally get rid of rule #3 because I do not have a channel with that name. You noted that CHLAUTH is disabled; if you have no intention of using this feature, you could use saveqmgr or dmpmqcfg to dump the MQSC commands needed to recreate the three default rules, then delete those rules. This will remove the three messages from the SYSTEM.CHLAUTH.DATA.QUEUE.
If in the future you come to your senses and turn CHLAUTH back on, you can restore the deleted rules from the backup you created.
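A hypothetical backup step using dmpmqcfg (shipped with v7.1 and later; QM1 is an illustrative queue manager name):
dmpmqcfg -m QM1 -x chlauth > QM1-chlauth-backup.mqsc
Feeding the saved file back through runmqsc QM1 recreates the rules.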

Channel attributes to provide barriers in MQ

What channel attributes can be utilized to provide barriers at different points in WebSphere MQ inbound channel processing, as an alternative to simply disabling the channel?
From a security perspective, the approach of using multiple controls on a channel definition provides some redundancy. For example, many years ago researchers found a bug (which IBM quickly fixed) in the channel protocol that allowed the channel to start despite an MCAUSER that should have prevented the connection. If the MQ admin had relied entirely on the MCAUSER value, the channels would have been vulnerable until the fix was applied.
For that reason I generally advise setting multiple controls on the channels that you wish to be disabled. The idea is that if one control fails, the channel fails to a safe state. You could, for example, use CHLAUTH to disable a channel, but if you then use the Broker's New Broker Configuration Wizard, the first thing it does (last time I checked) is disable CHLAUTH. Whoops.
Here are a few of the attributes that can be used to disable a channel; a combined example follows the list. Remember that if you apply these to a SYSTEM.DEF.* or SYSTEM.AUTO.* channel, you must override the attribute when creating a new legitimate channel, so do not go overboard and use all of them. Or, more precisely, use one control for every layer of tin foil present in your favorite hat. ;-)
Set MCAUSER('*NOBODY') because the asterisk is not a valid character in a user ID.
Set the max message length to something shorter than the MQMD length.
Set SSLCIPH, SSLCAUTH(REQUIRED) and make sure SSLPEER has an impossible value.
Use CHLAUTH rules but make sure that when you create the rules that allow access on legitimate channels that they are not so broad as to authorize access to channels you did not intend.
Specify exits that do not exist.
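As a sketch, several of these controls layered onto the model SVRCONN definition might look like this (the cipher, DN and exit name are illustrative placeholders, not values to copy verbatim):
ALTER CHL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN) +
MCAUSER('*NOBODY') +
MAXMSGL(100) +
SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA) +
SSLCAUTH(REQUIRED) +
SSLPEER('CN=NOSUCHCERT') +
SCYEXIT('NO.SUCH.EXIT(Entry)')
Any one of these alone should stop the channel from being usable; together they fail to a safe state if a single control is bypassed.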
Note that these measures address an attacker who does not already have access to the QMgr. An attacker with admin access will keep templates on hand from a saveqmgr of each MQ version. This allows the attacker to submit DEFINE commands that contain all the required attributes and thus not rely on the SYSTEM.* objects. The legitimate administrator, however, must use the same approach, or at least be cognizant of which attributes must be overridden to define a new channel.
In short, this approach provides perimeter controls, and the trade-off is administrative overhead. To be effective, use two or more unrelated controls (for example CHLAUTH, MCAUSER and an invalid exit spec), incorporate the settings into administrator training, and do not be tempted to use every possible control, because the cost of doing so rises faster than the benefit.
The most appropriate mechanism for providing barriers on inbound channels is the CHLAUTH rules introduced in v7.1. These allow you to block or allow inbound channels based on IP address (or host name in v8), remote queue manager name or client-side user ID, or certificate Subject's DN (and/or Issuer's DN in v8).
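As an illustration (the channel name, certificate DN and MCAUSER are hypothetical), a pair of rules that blocks a channel by default and then admits one known client by its certificate:
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
SET CHLAUTH('APP.SVRCONN') TYPE(SSLPEERMAP) SSLPEER('CN=app1,O=MyOrg') USERSRC(MAP) MCAUSER('appuser')
The more specific SSLPEERMAP rule takes precedence over the ADDRESSMAP rule when both match.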

How to check if the channel is still working in streadway/amqp RabbitMQ client? [duplicate]

This question already has answers here:
How to detect dead RabbitMQ connection?
(4 answers)
Closed 10 months ago.
I'm using github.com/streadway/amqp for my program. How should I make sure that the channel I'm using for consumption and/or production is still working, before re-initializing it?
For example, in ruby, I could simply do:
bunny_client = Bunny.new({....})
bunny_client.start
to start the client, and
if not bunny_client or bunny_client.status != :connected
  # re-initialize the client
end
How to do this with streadway/amqp client?
The QueueDeclare and QueueInspect functions may provide equivalent functionality. According to the docs:
QueueInspect:
Use this method to check how many unacknowledged messages reside in the queue, how many consumers are receiving deliveries, and whether a queue by this name already exists.
If the queue by this name exists, use Channel.QueueDeclare to check if it is declared with specific parameters.
If a queue by this name does not exist, an error will be returned and the channel will be closed.
QueueDeclare:
QueueDeclare declares a queue to hold messages and deliver to consumers. Declaring creates a queue if it doesn't already exist, or ensures that an existing queue matches the same parameters.
It looks like there's some good info regarding Durable queues (survive server restarts) in those docs as well.
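Separately from QueueInspect, the streadway/amqp library can notify you when a channel dies instead of you polling for it; here is a minimal sketch using Channel.NotifyClose (the broker URL is a placeholder):

package main

import (
    "log"

    "github.com/streadway/amqp"
)

func main() {
    conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") // placeholder URL
    if err != nil {
        log.Fatal(err)
    }
    ch, err := conn.Channel()
    if err != nil {
        log.Fatal(err)
    }

    // NotifyClose registers a listener that receives an *amqp.Error
    // when the channel is closed by the server or the library.
    closed := ch.NotifyClose(make(chan *amqp.Error, 1))

    go func() {
        if amqpErr := <-closed; amqpErr != nil {
            log.Printf("channel closed: %v; re-initialize it here", amqpErr)
        }
    }()

    // ... publish/consume with ch as usual ...
}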
I have worked around most connectivity issues by trial and error, seeing which patterns work and which don't. I think the best method is to flag a channel on error when using it. If it fails to publish, it gets flagged on the library side as well. The errors received from the server automatically terminate the channel anyway, so the flag just tells my channel pool to rebuild the flagged channels.
You can use my library in golang as an example to create a connection/channelpool and how I flag channels on error.
https://github.com/houseofcat/turbocookedrabbit

MQPUT while using MQCB : MQRC_HCONN_ASYNC_ACTIVE

Our process needs to read messages from a topic on the local Q manager and also write to a different topic on the same local Q manager.
To read messages we have used MQCB. The messages reach the callback function of the process. However, while the callback remains registered, we are not able to MQPUT messages to a different topic.
We get an error that says:
2500 : MQRC_HCONN_ASYNC_ACTIVE
An attempt to issue an MQI call has been made while the connection is started
Apparently, a single connection handle cannot be used to both read and write. We have to suspend the MQCB, MQPUT the message, and resume the MQCB to get it to work.
Is there a way to avoid having to suspend and resume?
Thanks in advance
Yes, that is the expected behavior when using MQCB. There are two approaches you can take:
1) Create another connection to the same queue manager to publish messages.
2) If your design is to publish messages whenever you receive a message in the callback function, then publish from the callback function itself.
Update
Regarding the MQRC_ALREADY_CONNECTED (2002) issue: which MQCNO_HANDLE_SHARE_* option have you used? I suggest you use the MQCNO_HANDLE_SHARE_BLOCK option to get around this problem. I wrote a sample program and created two connections on the same thread by using the MQCNO_HANDLE_SHARE_BLOCK option.
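A minimal C sketch of that second connection (the queue manager name QM1 is hypothetical and error handling is trimmed):

#include <stdio.h>
#include <cmqc.h>

int main(void)
{
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1"; /* hypothetical QMgr name */
    MQCNO   cno = {MQCNO_DEFAULT};
    MQHCONN hConnPub;   /* second handle, used only for publishing */
    MQLONG  compCode, reason;

    /* HANDLE_SHARE_BLOCK makes the handle shareable; an MQI call made
       while the handle is busy waits instead of failing. */
    cno.Options |= MQCNO_HANDLE_SHARE_BLOCK;

    MQCONNX(qmName, &cno, &hConnPub, &compCode, &reason);
    if (compCode == MQCC_FAILED)
        printf("MQCONNX failed, reason %d\n", (int)reason);

    /* ... MQOPEN the target topic and MQPUT with hConnPub, while the
       MQCB callback keeps running on the first connection handle ... */
    return 0;
}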

WebSphere MQ v7.1 Channels going down

The sender and receiver channels that I have configured between two queue managers (WebSphere MQ v7.1 running on Red Hat Linux) are going down pretty frequently. Any idea why? How can I debug this? Thanks.
Channels are expected to go down. The idea is that they stay active as long as there is traffic and then time out. Assuming they've been configured to trigger, the presence of a message on the XMitQ causes the channel to start up again.
The reason for this is that a triggered channel will generally restart if interrupted by a network failure or other adverse event. However if a channel is configured to stay running 24x7 then the only way it stops is due to one of these adverse events and that increases the likelihood that human intervention will be required to restart the channel. On the other hand, a channel that times out can survive all sorts of nasty network events that occur while it is inactive. Allowing it to time out when not in use thus improves overall reliability of the channel.
So how do you cause a channel to trigger? Make sure the transmission queue has TRIGGER set and the TRIGTYPE, TRIGDATA and INITQ attributes filled in. For example, to define a transmission queue to the JUPITER QMgr:
DEF QL(JUPITER) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('MYQMGR.JUPITER') +
INITQ(SYSTEM.CHANNEL.INITQ) +
REPLACE
The only variable of the bunch is TRIGDATA which contains the name of the channel serving this XMitQ.
Of course, the channel initiator must be running, but in modern versions of WMQ it starts by default (based on the value of the queue manager's SCHINIT attribute), so it generally will in fact be running.
A channel in STOPPED state cannot be triggered. By default the STOP CHL command uses STATUS(STOPPED), so most of the time manually stopping a channel prevents triggering. If you want to stop a channel in such a way that it will restart (for example, to test triggering), use the STOP CHL(CHLNAME) STATUS(INACTIVE) command. If the channel is already in STOPPED state, either issue the START CHL command to make it start immediately, or use STOP CHL(CHLNAME) STATUS(INACTIVE) to change the status from STOPPED to INACTIVE without starting it.
Once the channels are up, the DISCINT attribute of the channel determines how long it will run without traffic before timing out. The value is in seconds and defaults to 6000 (100 minutes). The DISCINT, KAINT and HBINT values combine to determine when the channel comes down. Note that the TCP specification requires implementations to disable keepalive by default, so if you want to use keepalive on your channels, you must enable it in the QMgr tuning as described here.
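For example, to let the sender channel defined above time out after 30 minutes of inactivity while exchanging heartbeats every five minutes (values illustrative):
ALTER CHL(MYQMGR.JUPITER) CHLTYPE(SDR) +
DISCINT(1800) +
HBINT(300)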
Please see Triggering Channels in the Infocenter for more on the configuration details. Take a look at SupportPac MD0C WebSphere MQ - Keeping Channels Up and Running if you want to know more about the internals and tuning. (The SupportPac is a bit dated but the principles of tuning mostly still apply. Where there are discrepancies, the Infocenter is the authoritative version.)
If you want to keep channels up continuously, set DISCINT(0), but remember that triggering remains the preferred option. Some shops need to minimize response times during the business day and so set DISCINT to a value that lets the channels time out at night but generally keeps them running all day.
If you have triggering set up right and the channels still go down before DISCINT expires, check the error logs for the reason why. These reside in the QMgr's directory under errors. For example, on UNIX/Linux they are in /var/mqm/qmgrs/qmgrname/errors and on Windows the default location is C:\Program Files (x86)\IBM\WebSphere MQ\qmgrs\qmgrname\errors. Look for the files named AMQERR??.LOG where ?? = 01, 02, or 03. The logs rotate where 01 is current, 02 is next and so on. If you have a very busy QMgr, capture these as soon as the channel goes down or they could roll off.
