Intended use of Transmission Queue - ibm-mq

This is a very basic question about IBM WebSphere MQ V7.
Regarding the transmission queue, my understanding is that it is only used together with a remote queue definition that resides on the same queue manager. Therefore, if I want to put a message to a remote destination, I need to put it to the remote queue.
It is like this.
App --> Remote queue --> Transmission Queue
My question is:
Is it possible to put the message directly onto the transmission queue, like this?
App --> Transmission Queue
--Modified on 2014.03.17 --
I found a way to put a message directly onto the transmission queue. I do not know whether this is common practice, but in order to do it I needed to prepend an MQXQH header to the message. I tried it and confirmed that it works. See the Infocenter reference here.

Do not ever put directly to a transmission queue. It is dangerous if you do not know what you are doing.
You should put your message to a remote queue. A remote queue is not the same as a local queue. A remote queue is simply a pointer to a queue on another queue manager.
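For illustration, a minimal remote queue definition on the sending queue manager could look like the sketch below (all names here are invented placeholders):
DEF QREMOTE('APP.TARGET.QUEUE') +
RNAME('TARGET.QUEUE') +
RQMNAME('QMB') +
XMITQ('QMB') +
REPLACE
The application simply opens APP.TARGET.QUEUE; the queue manager resolves it to TARGET.QUEUE on QMB and places the message on the QMB transmission queue for the channel to deliver.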

Although it is possible to put messages directly on the XMitQ, there is considerable risk in allowing that to occur, so most admins will prevent applications from directly accessing that queue. As you have found, it is possible to construct a message with the transmission queue header followed by a normal message with the MQMD and payload. (This is, in fact, exactly how the MCA works.)
The problem here is that the QMgr does not check the values in the MQMD residing in the payload, so you can set mqm as the MQMD.UserID, address the message to the remote command queue, and grant yourself admin access to that remote QMgr.
Security-conscious administrators typically use two security controls to prevent this. First, they disallow direct access to the XMitQ. That helps for outbound messages. More importantly, they set the MCAUSER of their RCVR/RQSTR/CLUSRCVR channels to a non-admin user ID that is not authorized to put messages onto any sensitive queues.
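As a rough sketch of those two controls in MQSC (channel, queue, and user names are invented for illustration; SET AUTHREC needs MQ 7.1 or later, otherwise use setmqaut):
* Receiving QMgr: run inbound channels under a non-admin ID
ALTER CHANNEL('QMA.TO.QMB') CHLTYPE(RCVR) MCAUSER('mqapp')
* Sending QMgr: remove direct put access to the XMitQ for application IDs
SET AUTHREC PROFILE('QMB') OBJTYPE(QUEUE) PRINCIPAL('appuser') AUTHRMV(PUT)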
The other issue is, of course, that what you describe completely defeats WMQ's name resolution. By embedding routing into the app, you prevent the administrator from adjusting channel weights, cluster settings, failover and load distribution at the network level. Need to redistribute traffic? Redeploy the code. Not a good plan.
So for security reasons and because you paid a lot of money to get WMQ's reliability - much of which comes from dynamic addressing and name resolution features - coding apps to write directly to the XMitQ is strongly discouraged.

You should not be using the transmission queue directly. It is used by the message channel agent (MCA) as temporary storage when sending messages across to a remote queue manager.
This is distributed queuing - i.e. you put a message to Queue Manager A and want it routed to a local queue on Queue Manager B. So you define a reference on QM-A referring to the local queue on QM-B. This reference is the 'remote queue definition'.
The remote queue definition specifies the transmission queue name. The transmission queue is bound to the MCA, which in turn knows about the remote QM.
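In MQSC terms, the transmission queue and the MCA (sender channel) it feeds might be defined roughly like this on QM-A (names and connection details are placeholders):
* Transmission queue, conventionally named after the remote queue manager
DEFINE QLOCAL('QMB') USAGE(XMITQ)
* Sender channel whose MCA drains that XMitQ and delivers to QM-B
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
CONNAME('qmb.example.net(1414)') XMITQ('QMB')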

Related

What is the difference between remote, local and alias queues

Can somebody help me understand the basics of these 3 queues with an example? When do we use each of the 3?
Perhaps a more simple explanation: think of a local queue as a queue which exists on the queue manager on which it is defined; you can PUT messages to and GET messages from a local queue. A remote queue is like a pointer to a queue on another queue manager, which is usually on a different host. Therefore messages can be PUT to it (and they will usually arrive on a local queue at that remote host), but you cannot GET messages from a remote queue.
Simply put, a queue manager only ever hosts messages on its own local and transmission queues. If you want to reach another queue manager, you use definitions which tell the queue manager on which the 'put' is running how to route the message to a destination queue manager; this ends up with a message on a transmission queue, which is then picked up and sent down a channel towards that destination. Alias queues are just an opportunity to use a different name for another queue. Remote queues are definitions on one queue manager with information about where the message should be routed.
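As a hedged illustration (all object names invented), the three kinds of definition look like this:
* Local queue: messages are actually stored here; applications can PUT and GET
DEFINE QLOCAL('PAYMENTS.IN')
* Alias queue: an alternative name that resolves to another queue
DEFINE QALIAS('PAYMENTS') TARGET('PAYMENTS.IN')
* Remote queue: a pointer to a queue on another queue manager; PUT only
DEFINE QREMOTE('PAYMENTS.ON.QMB') RNAME('PAYMENTS.IN') RQMNAME('QMB') XMITQ('QMB')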
MQ documentation:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
Another description:
https://www.scribd.com/doc/44887119/Different-Types-Queues-in-Websphere-MQ
A queue is known to a program as local if it is owned by the queue manager to which the program is connected; the queue is known as remote if it is owned by a different queue manager. The important difference between these two types of queue is that you can get messages only from local queues. (You can put messages on both types of queue.)
References:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzal.doc/fg10950_.htm
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
A queue hosted by the queue manager to which an application is connected is a local queue for that application. A queue hosted by a remotely located queue manager is a remote queue for that application. We should always keep in mind that we always read messages from a local queue. A message put to a remote queue will be routed to the target local queue through the remote queue definition, the transmission queue, and the channel.

Remote Queue Manager backup using dmpmqcfg is not working

While taking a backup of a remote queue manager using the dmpmqcfg command I am getting "MQSC timed out waiting for a response from the command server". Below is the command I am using.
dmpmqcfg -m <Queue Manager> -r <remote qmgr> -x all -a -o mqsc > D:\RemoteQueueManagerBackup\RemoteQmgr.txt
Before running this command I created server and requester channels at both queue manager ends; when the above command is executed the channels come into running state, but the backup is not taken.
For a few queue managers it works fine, and for a few it does not. I compared all the properties between a working queue manager and a failing one and they look the same.
Update
A transmission queue named after the remote queue manager is defined at both ends, e.g. Queue Manager A has an XMitQ named B, and Queue Manager B has an XMitQ named A. Server and requester channels are created. When the command is triggered the channels come into Running state.
Any Errors returned from dmpmqcfg? --- No.
Whether DLQ is enabled and a queue of the specified name defined? - Yes
Presence or absence of messages on the DLQ at either end? - Messages are getting stored in Remote Queue Manager Dead Letter Queue.
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)? - Channels come into Running state automatically (triggering is enabled), so no.
Auth related errors? The only error we are seeing is
AMQ9544: Messages not put to destination queue.
Explanation: During the processing of channel "x.Y" one or more
messages could not be put to the destination queue and attempts were
made to put them to a dead-letter queue. The location of the queue is
2, where 1 is the local dead-letter queue and 2 is the remote
dead-letter queue
Action: Examine the contents of the DLQ. Each message is contained in
a structure that describes why the message was put to the queue, and
to where it was originally addressed. Also look at previous error
messages to see if the attempt to put messages to a dlq failed. The
program identifier (PID) of the processing program was '5608(10796)'
Examined the remote Dead Letter Queue; the reason code on the messages is:
MQRC_PERSISTENCE_NOT_ALLOWED
You do not mention the details of the channels and their XMitQs. In order for the messages to get to the remote machine and the replies to get back each QMgr needs to be able to resolve a path to the other. That means something must have the name of the remote QMgr. That something must either be the XMitQ itself or a QMgr alias that points to the XMitQ.
For example you have <Queue Manager> and <remote qmgr>. The local QMgr may have a transmit queue named <remote qmgr> which resolves that destination. Since you do not report any errors from dmpmqcfg, I will assume the messages are getting sent out OK.
That means the messages are not getting back. This may be because the remote QMgr's XMitQ has a name like <Queue Manager>.XMITQ instead of <Queue Manager>. This can be solved by defining a QMgr alias:
DEF QREMOTE('<Queue Manager>') +
XMITQ('<Queue Manager>.XMITQ') +
RNAME(' ') +
RQMNAME('<Queue Manager>') +
REPLACE
If this is in fact what's wrong, you should see some messages in the DLQ on the remote QMgr - assuming that one is defined and specified in the QMgr attributes.
If this does not solve the problem, please provide additional information including:
Any errors returned from dmpmqcfg
Whether DLQ is enabled and a queue of the specified name defined
Presence or absence of messages on the DLQ at either end
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)
Platforms of the QMgrs involved
Any relevant entries from the error logs, including auths errors (for example, mqm is a valid ID on Linux but not on Windows so would generate auths errors in this scenario if the sending QMgr were Linux and the remote QMgr Windows)
Update responding to additional info in question
So it looks like you have at least a couple of issues. The one that you have discovered is that temporary dynamic queues do not take persistent messages. If you think about it, this makes perfect sense. If a message is persistent, this tells MQ to take all precautions against losing it. But a temporary dynamic queue is deleted when the last input handle is released, discarding any messages that it holds.
Any attempt to put a persistent message onto a temporary dynamic queue sends conflicting signals to MQ. Should it accept a persistent message that it knows will be implicitly deleted? Or should it not delete the temporary dynamic queue in order to preserve the messages? Rather than try to guess the user's intent, it simply disallows the action. Thus, your persistent reply messages arrive at the local QMgr, find a temporary dynamic queue, and then are diverted to a DLQ.
You have already found the solution to this problem. Well, one of the solutions to this problem, anyway -- alter DEFPSIST so that the messages are non-persistent. Another solution would be to use the client connection capabilities of dmpmqcfg to connect directly to the remote QMgrs rather than routing through a local QMgr.
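Hedged sketches of both workarounds follow; the model queue name and the connection details are assumptions, not details taken from the question, so check them against your environment:
* Option 1: make the reply messages non-persistent by default
* (assumes the replies are built from SYSTEM.DEFAULT.MODEL.QUEUE)
ALTER QMODEL('SYSTEM.DEFAULT.MODEL.QUEUE') DEFPSIST(NO)
Option 2, if your dmpmqcfg level supports the -c client option, is to connect directly to the remote queue manager, for example:
set MQSERVER=SYSTEM.ADMIN.SVRCONN/TCP/remotehost(1414)
dmpmqcfg -m <remote qmgr> -x all -a -o mqsc -c default > D:\RemoteQueueManagerBackup\RemoteQmgr.txt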
As to the remaining few QMgrs, you need to run the same diagnostic again. Check for error messages, depth in DLQ at both ends, channels running, auth errors. Remember that the resource errors, auth errors, routing problems, etc. can occur at either end so look for error messages and misrouted messages at both QMgrs. Also, verify channels by sending messages to and from both QMgrs to a known queue such as SYSTEM.DEFAULT.LOCAL.QUEUE or an application queue.
Another useful technique when playing a game of "where's my message" is to trace the message flow by stopping the channels in both directions prior to sending the commands. You can then run dmpmqcfg and see depth on the outbound XMitQ to verify the commands were sent. (You will have to GET-enable the XMitQ if you want to browse the messages since the channel agent GET-disables it. This will let you verify their persistence, expiry values, etc.)
Assuming the commands look OK you start the outbound channel only and let the messages flow to the remote QMgr where they are processed. Since the return channel is still stopped, the replies stack up in the return XMitQ. You can view them there to determine things like their persistence, expiry, and return codes/results of the command. If they look OK, start the channel and then look for them on the local QMgr.
For the few QMgrs where you still have issues, you should easily be able to find out where the messages are getting lost or being discarded. Keep in mind that non-persistent messages are sent across a channel outside any units of work so if there is no valid destination (or they aren't authorized to it) they are discarded silently. This diagnostic method of trapping them on XMitQs isolates each step so that if they are being discarded you can find out where and why.
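A hedged sketch of that trap-and-inspect sequence (channel and queue names are placeholders for your own):
* Stop traffic in both directions from the local QMgr
STOP CHANNEL('QMA.TO.QMB')
STOP CHANNEL('QMB.TO.QMA')
* Allow the outbound XMitQ to be browsed, then check depth after running dmpmqcfg
ALTER QLOCAL('QMB') GET(ENABLED)
DISPLAY QLOCAL('QMB') CURDEPTH
Browse the trapped messages with something like the amqsbcg sample to inspect persistence and expiry, then set GET(DISABLED) again and START CHANNEL to release them.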

When to choose a remote queue design versus local queue for get/put activities

I'm trying to figure out under what conditions I would want to implement a remote queue versus a local one for 2 endpoint applications.
Consider this scenario: App A on Server A needs to send messages to App B on Server B via MQServer1.
It seems like the simplest configuration would be to create a single local queue on MQServer1 and configure AppA to put messages to the local queue while configuring AppB to get messages from the same local queue. Both AppA and AppB would connect to the same Queue Manager but execute different commands.
What sort of circumstances would require the need to install another MQ server (e.g. MQServer2) and configure a remote queue on MQServer1 which instead sends the messages from AppA over a channel to a local queue on MQServer2 to be consumed by AppB?
I believe I understand the benefit of remote queuing but I'm not sure when it's best used over a more simpler design.
Here are some problems with what you call the simpler design that you don't have with remote queuing:-
Time Independence - Server 1 has to be available all the time, whereas with a remote queue, once the messages have been moved to Server B, Server A and Server 1 don't need to be online when App B wants to get its messages.
Network Efficiency - with two client applications putting or getting from a central queue, you have two inefficient network hops, instead of one efficient channel batched network connection from Server A to Server B (no need for Server 1 in the middle)
Network Problems - No network, no messages. Whereas when they are stored locally, any that have already arrived can be processed even while the network is down. Likewise, the application putting messages is also not held up by a network problem, the messages sit on the transmit queue easy to be moved, and the application can get on with the next thing.
Of course your applications should be written so that they aren't even aware of the difference, and it's just configuration changes that switch you from one design to the other.
Here we can have a separate queue manager for each application. Application A sends the message to a remote queue defined on its local queue manager, which in turn places it on the transmission queue; the defined channels (which need to be configured on the queue managers) then deliver it to the local queue of Application B.
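As a rough illustration of that layout (all names invented), the definitions could look like this:
* On QMA, where App A connects
DEFINE QLOCAL('QMB') USAGE(XMITQ)
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
CONNAME('serverb(1414)') XMITQ('QMB')
DEFINE QREMOTE('APPB.REQUEST') RNAME('APPB.REQUEST') RQMNAME('QMB') XMITQ('QMB')
* On QMB, where App B connects and gets its messages locally
DEFINE QLOCAL('APPB.REQUEST')
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(RCVR) TRPTYPE(TCP)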

How does the DEADQ work for SVRCONN channel in MQ?

I've a question about the DEADQ in MQ. I know that the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put inhibited. But if the client application connects to the QMGR through a SVRCONN channel and that target queue is full right now, will the message sent by the client application go to the DEADQ, or will the put operation just return with a failure saying the queue is full?
If it works as the latter, does that mean the DEADQ is not used in client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. Best the QMgr can do is to send back a report message, if the app has requested it, and if the app specified a reply-to queue.
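For reference, the receiving queue manager only has a usable DLQ if one is configured; a minimal, hedged example (the queue name is the conventional default, not mandatory):
ALTER QMGR DEADQ('SYSTEM.DEAD.LETTER.QUEUE')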
When an application is connected over a client channel the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that "correct action" does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
The one exception to this is when using JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified or the app is not authorized to it or if it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
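A hedged sketch of the queue attributes the JMS classes consult for this behaviour (queue names are placeholders):
* After 3 failed deliveries, JMS moves the poison message to the backout queue
ALTER QLOCAL('APP.REQUEST') BOTHRESH(3) BOQNAME('APP.REQUEST.BACKOUT')
DEFINE QLOCAL('APP.REQUEST.BACKOUT')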

Build durable architecture with Websphere MQ clients

How can you create a durable architecture environment using MQ Client and server if the clients don't allow you to persist messages nor do they allow for assured delivery?
Just trying to figure out how you can build a scalable / durable architecture if the clients don't appear to contain any of the necessary components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance you might install TCP and WMQ as a transport and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (Now WebSphere MQ) have largely been solved. The networks have improved by several nines of availability and high availability hardware and software clustering have provided options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default operation of MQ is that when a message is sent (MQPUT or in JMS terms producer.send), the application does not get a response back on the MQPUT call until the message has reached a queue on a queue manager. i.e. MQPUT is a synchronous call, and if you get a completion code of OK, that means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ Server, and therefore you can rely on MQ to look after the message and forward it on to where it needs to get to.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points.
If you have very unreliable networks such that you expect to regularly fail to get the messages to a server, and expect to need regular retries such that you need client-side message protection, one option you could investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1) which is designed for low bandwidth and/or unreliable network communications, as a route into the wider MQ.
