I am trying to set up a message channel on IBM MQ v8.
I am running IBM MQ Server 8.x on Ubuntu Linux.
I have two queue managers, QM1 and QM2.
On QM1 I have created a sender channel, and on QM2 a receiver channel.
On QM1:
Remote queue definition:
DEFINE QREMOTE(RMQ1) DESCR('Remote queue for QM2') REPLACE +
PUT(ENABLED) XMITQ(QM2) RNAME(Q_ON_QM2) RQMNAME(QM2)
Transmission queue definition:
DEFINE QLOCAL(QM2) DESCR('Transmission queue to QM2') REPLACE +
USAGE(XMITQ) PUT(ENABLED) GET(ENABLED) TRIGGER TRIGTYPE(FIRST) +
TRIGDATA(QM1.TO.QM2) INITQ(SYSTEM.CHANNEL.INITQ)
Sender channel definition for a TCP/IP connection:
DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) +
REPLACE DESCR('Sender channel to QM2') XMITQ(QM2) +
CONNAME('127.0.0.1(1491)')  <-- QM2's listener is on 1490
On the second queue manager (QM2):
Local queue definition:
DEFINE QLOCAL(Q_ON_QM2) REPLACE PUT(ENABLED) GET(ENABLED) +
DESCR('Local queue ')
Receiver channel definition for a TCP/IP connection:
DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(RCVR) TRPTYPE(TCP) +
REPLACE DESCR('Receiver channel from QM1')
At the end of the configuration, my sender channel remains in RETRYING state, and the receiver channel remains in INACTIVE state.
How do I get this channel running?
At first glance, it appears the problem is with your port.
The CONNAME should specify the port on which the receiver's listener is actually running. Is it 1491 or 1490?
CONNAME('127.0.0.1(1491)')  <-- QM2's listener is on 1490
Verify that the listener is running for the receiving queue manager and specify that port in your CONNAME.
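For example (a hedged MQSC sketch, assuming QM2's listener really is on port 1490 as your comment says):
* On QM2: confirm which port the listener is actually using
DISPLAY LSSTATUS(*) PORT
* On QM1: point the sender channel at that port, then restart it
ALTER CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) CONNAME('127.0.0.1(1490)')
STOP CHANNEL(QM1.TO.QM2)
START CHANNEL(QM1.TO.QM2)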
There could be many reasons for a sender channel going into a retrying state.
1. Wrong parameters.
Check the connection name as Valerie suggested. Make sure the IP address and port number point to the receiver queue manager.
2. Transmission queue unavailable.
Make sure the transmission queue is available. Note that sometimes the transmission queue will be available but GET disabled; in that case, too, the sender channel will go into a retrying state. The sender channel opens the transmission queue in exclusive mode, which means that if the transmission queue is opened by another application (say RFHUTIL), the sender channel will not be able to access it and will go into a retrying state. So make sure the transmission queue is not opened by some other application (see the MQSC sketch after this list).
3. Receiver channel unavailable.
This could be the case when the receiver queue manager is down.
Also, make sure the name of the receiver channel is the same as that of the sender channel (which is correct in your case).
4. Receiver channel and sender channel out of sequence.
The receiver channel and sender channel each maintain a sequence number for message transmission. Due to environmental issues such as network glitches, the sequence numbers can become inconsistent between the sender and receiver channels.
RESET your sender and receiver channels to overcome this issue, as shown below.
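A hedged MQSC sketch for points 2 and 4, using the channel and queue names from your definitions above:
* Point 2: see which applications have the transmission queue open
DISPLAY QSTATUS(QM2) TYPE(HANDLE) APPLTAG CHANNEL
* Point 4: reset the sequence number on the sender (repeat on the receiver side if needed)
RESET CHANNEL(QM1.TO.QM2) SEQNUM(1)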
I am facing a scenario where the reply queue I connect to runs out of handles. I have traced this to the fact that my JMS producers are being cached but my JMS consumers are not. I am able to send and receive messages just fine, so there is no problem connecting to, sending to, or receiving from the queues. I am using CachingConnectionFactory (sessionCacheSize = 10) with com.ibm.mq.jms.MQQueueConnectionFactory as the target factory when instantiating the JmsTemplate. A code snippet follows:
:
:
String replyQueue = "MyQueue"; // the reply queue that runs out of handles
messageCreator.setReplyToQueue(new MQQueue(replyQueue));
jmsTemplate.setReceiveTimeout(receiveTimeout);
jmsTemplate.send(destination, messageCreator); // send to the destination queue
Message message = jmsTemplate.receiveSelected(replyQueue,
        String.format("JMSCorrelationID = '%s'", messageCreator.getMessageId()));
:
:
From the logs (JMS TRACE is enabled), the producer is cached, so the destination queue's handle count does not increase.
// The first time around (for Producer)
Registering cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
// Second time around, the cached producer is reused
Found cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
However, the handles for the replyQueue keep increasing, because for every call to that queue I see a new JMS consumer being registered. Ultimately the calls to open the replyQueue fail with MQRC_HANDLE_NOT_AVAILABLE.
// First time around
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#b3ytd25b
// Second time around, another MessageConsumer is registered!
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#re25b
My memory is a bit dim on this, but here is what is happening. You are receiving messages based on a message selector, and this selector is always changing. So when the library tries to cache/pool by connection/session/consumer, the consumer is always different, and each one requires a new cache entry. As a test, either remove the selector or make it a constant and see what happens.
After you go through your 10 sessions, a new connection will be created, but the existing one is not closed. Increase your session count to 100, for example, and the connection count on the MQ broker should climb ten times more slowly.
You need to create a new consumer for every message receive, as your correlation ID is always changing, so cache only the connection and session. No matter what you do, you will always have to make a round trip to the broker to ask for the message with the new correlation ID.
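A minimal sketch of that approach (assumptions: Spring's CachingConnectionFactory wraps the MQ factory as in your snippet; the class and method names here are hypothetical illustrations, not your code):
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.springframework.jms.connection.CachingConnectionFactory;

public class ReplyReceiver {
    // Receives one reply by correlation ID, creating (and closing) a fresh
    // consumer per call while the connection and sessions stay cached.
    public static Message receiveByCorrelId(CachingConnectionFactory ccf,
            String queueName, String correlationId, long timeoutMillis)
            throws JMSException {
        // Note: ccf.setCacheConsumers(false) should be set once at
        // configuration time, so closing a consumer really releases its handle.
        Connection conn = ccf.createConnection(); // the shared cached connection
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            Queue queue = session.createQueue(queueName);
            // A new consumer per call: the selector differs every time,
            // so a cached consumer could never be reused anyway.
            MessageConsumer consumer = session.createConsumer(queue,
                    String.format("JMSCorrelationID = '%s'", correlationId));
            try {
                return consumer.receive(timeoutMillis);
            } finally {
                consumer.close(); // releases the queue handle
            }
        } finally {
            session.close(); // returned to the cache, not physically closed
            conn.close();    // no-op for CachingConnectionFactory
        }
    }
}
The key design point is CachingConnectionFactory's cacheConsumers flag: with consumer caching on, every distinct selector string becomes a new, never-reclaimed cache entry, which is exactly the handle growth you are seeing.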
I would appreciate advice on the following scenario (for my personal learning):
Setup is as follows: QM1 -> QM2 -> QM3
QM1 - 1 alias queue (that will put messages to the remote queue), 1 remote queue (destined for QM2's local queue), 1 transmission queue (to QM2) and 1 sender channel to QM2
QREMOTE DEFN as follows:
DEFINE QREMOTE('QM1.RQ1') RQMNAME('QM2') RNAME('QM2.LQ1') XMITQ('QM2') DEFPSIST(YES)
QM2 - 1 local queue (to receive message from QM1), 1 transmission queue (to QM3), 1 receiver channel from QM1 and 1 sender channel to QM3
QM3 - 1 local queue (to receive messages) and 1 receiver channel (between QM2 and QM3)
Note: QM1 and QM2 are connected, and QM2 and QM3 are connected, but messages from QM1 to QM3 need to pass through QM2 to reach the local queue on QM3.
Question: Without modifying any settings on QM2 and QM3, what do I configure on QM1 in order to send a message to QM3's local queue from QM1?
Change your QREMOTE as follows:
ALTER QREMOTE('QM1.RQ1') RQMNAME('QM3') RNAME('QM3.LQ1') XMITQ('QM2')
As you can see, a message put to this queue still goes onto the QM2 transmission queue. When the SDR/RCVR channel moves it as far as QM2, the RCVR channel does an MQPUT to queue=QM3.LQ1 on qmgr=QM3; name resolution on QM2 then places the message onto the QM3 transmission queue, where the next SDR/RCVR channel moves it to QM3.
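A quick hedged verification using the sample programs shipped with MQ (assuming the channels between the queue managers are running):
* Put a test message via the remote queue on QM1
amqsput QM1.RQ1 QM1
* Then read it back from the local queue on QM3
amqsget QM3.LQ1 QM3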
I use OMNeT++ 4.6, SUMO 0.22.0 and Veins 4a2.
In my scenario, when an RSU receives a message from a node, it sends an ACK using the prepareWSM method:
sendWSM(prepareWSM("ack", ackLengthBits, type_SCH, ackPriority, senderId, 2))
So the RSU sends an ACK to senderId, which is the node that sent the message.
In my log file, I notice that some nodes other than the original sender node also receive this ACK.
I need to know whether the prepareWSM method broadcasts the ACK to all nodes in range, or whether what I did sends the ACK only to the sender node.
Although you can set the receiver address on the WaveShortMessage, it is ignored in Mac1609_4.cc (line 178 ff.), since originally only broadcast transmission is used in C2X communication:
//send the packet
Mac80211Pkt* mac = new Mac80211Pkt(pktToSend->getName(), pktToSend->getKind());
mac->setDestAddr(LAddress::L2BROADCAST);
To achieve the acknowledgement scheme you want, you have to check the recipient address of each message you receive in the APP layer and ignore messages that are not addressed to you (myId).
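A minimal sketch of that check (assumptions: Veins 4a2, where BaseWaveApplLayer::prepareWSM stores its rcvId argument in the WSM's recipientAddress field and myId identifies the local node; MyApplLayer is a hypothetical application-layer subclass):
void MyApplLayer::onData(WaveShortMessage* wsm) {
    // Every node in radio range receives the frame, because the MAC
    // destination is always L2BROADCAST (see Mac1609_4.cc above).
    if (std::string(wsm->getName()) == "ack" && wsm->getRecipientAddress() != myId) {
        return; // ACK addressed to another node - ignore it
    }
    // ... handle messages that are actually addressed to this node ...
}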
I think I tried to start a channel that was already running, or something similar. Whenever I start the sender channel, the receiver channel goes into a PAUSED state. I looked this up and found something about the AdoptNewMCA configuration, but I am not sure how to set it at the queue manager level. How do I fix this cleanly? Merely stopping and restarting the channels does not do it.
Error log says:
08/02/2012 12:38:41 PM - Process(19161.269) User(mqm) Program(amqrmppa)
Host() Installation(Installation1)
VRMF(7.1.0.0) QMgr(QM_TEST2)
AMQ9514: Channel 'QM_TEST1.TO.QM_TEST2' is in use.
EXPLANATION: The requested operation failed because channel
'QM_TEST1.TO.QM_TEST2' is currently active. ACTION: Either end the channel
manually, or wait for it to close, and retry the operation.
----- amqrcsia.c : 1042 -------------------------------------------------------
08/02/2012 12:38:41 PM - Process(19161.269) User(mqm) Program(amqrmppa)
Host(...) Installation(Installation1)
VRMF(7.1.0.0) QMgr(QM_TEST2)
AMQ9999: Channel 'QM_TEST1.TO.QM_TEST2' to host '17.2.33.44' ended abnormally.
EXPLANATION: The channel program running under process ID 19161 for
channel 'QM_TEST1.TO.QM_TEST2' ended abnormally. The host name is
'17.2.33.44'; in some cases the host name cannot be
determined and so is shown as '????'. ACTION: Look at previous error
messages for the channel program in the error logs to determine the
cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or
"SuppressMessage" attributes under the "QMErrorLog" stanza in qm.ini.
Further information can be found in the System Administration Guide.
----- amqrmrsa.c : 887 --------------------------------------------------------
When looking these things up, I'd start with the product manuals. In this case, the Infocenter topic on channel states says that a channel in PAUSED state is waiting on a retry interval. The sub-topic on channel errors explains why sending or receiving channels can be in retry:
If a channel is unable to put a message to the target queue because
that queue is full or put inhibited, the channel can retry the
operation a number of times (specified in the message-retry count
attribute) at a time interval (specified in the message-retry interval
attribute). Alternatively, you can write your own message-retry exit
that determines which circumstances cause a retry, and the number of
attempts made. The channel goes to PAUSED state while waiting for the
message-retry interval to finish.
So if you stop your channels, you should see messages in the XMitQ on the sending side. If you GET-enable that queue, you can browse the messages, look at the header, and see which queue they are destined for. On the receiving side, look to see whether that queue is full.
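A hedged sketch of those checks (queue manager and channel names are taken from the log above; the XMitQ is assumed to be named after the remote QMgr, and TARGET.QUEUE stands for whatever queue the message header names):
* On the sender: allow browsing of the transmission queue
ALTER QLOCAL(QM_TEST2) GET(ENABLED)
* Browse the messages, e.g. with the amqsbcg sample, to read the XQH/MQMD headers:
*   amqsbcg QM_TEST2 QM_TEST1
* On the receiver: check whether the target queue is full
DISPLAY QLOCAL(TARGET.QUEUE) CURDEPTH MAXDEPTH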
Classic fast-sender, slow-consumer problem here. If the consumer can't keep up, the messages back up on the receiving QMgr, then the channel goes to retry and they begin to back up on the sending QMgr. You've got to monitor depth and input handles on request queues.
Make sure a DLQ is set.
Try reducing the message retry count to 1 to speed up use of the DLQ.
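A hedged MQSC sketch of both suggestions (names from the log above; MRRTY is the message-retry count attribute on the receiving channel):
* On QM_TEST2: set the dead-letter queue and lower the message retry count
ALTER QMGR DEADQ(SYSTEM.DEAD.LETTER.QUEUE)
ALTER CHANNEL(QM_TEST1.TO.QM_TEST2) CHLTYPE(RCVR) MRRTY(1)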
We are attempting to consolidate the DLQs across our enterprise into a single queue (an Enterprise_DLQ, if you will...). We have a mix of QMs on various platforms - mainframe, various Unix flavours (Linux, AIX, Solaris etc.), Windows, AS/400...
The idea was to configure the DLQ on each QM (set the DEADQ attribute on the QM) to the ENTERPRISE_DLQ, which is a cluster queue. All the QMs in the enterprise are members of the cluster. This approach, however, does not seem to work when we tested it.
I have tested this by setting up a simple cluster with 4 QMs. On one of the QMs, I defined a QRemote to a non-existent QM and a non-existent Q, but with a valid XMitQ, and configured the requisite SDR channel between the QMs as follows:
QM_FR - Full_Repos
QM1, QM2, QM3 - members of the Cluster
QM_FR hosts ENTERPRISE_DLQ which is advertised to the Cluster
On QM3, set up the following:
QM3.QM1 - SDR to QM1; ql(QM1) with usage XMITQ; qr(qr.not_exist) rqmname(not_exist) rname(not_exist) xmitq(qm1); set up ql(QM1) to trigger-start QM3.QM1 when a msg arrives on QM1
On QM1:
QM3.QM1 - rcvr chl, ql(local_dlq), ql(qa.enterise_dlq), qr(qr.enterprise.dlq)
Test 1:
Set deadq on QM1 to ENTERPRISE_DLQ, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - ENTERPRISE_DLQ!!
ql(qm1) curdepth(1)
Test 2:
Set deadq on QM1 to qr.enterprise.dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qr.enterprise.dlq (all caps)!!
ql(qm1) curdepth(2)
Test 3:
Set deadq on QM1 to qa.enterise_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qa.enterise_dlq (all caps)!!
ql(qm1) curdepth(3)
Test 4:
Set deadq on QM1 to local_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RUNNING, all msgs on QM3 ql(QM1) make it to local_dlq on QM3.
ql(qm1) curdepth(0)
Now the question: it looks like the DLQ on a QM must be a local queue. Is this a correct conclusion? If not, how can I make all the DLQ msgs go to a single queue - the Enterprise_DLQ above?
One obvious solution is to define a trigger on local_dlq on QM3 (and do the same on the other QMs) which will read the msgs and write them to the cluster queue - ENTERPRISE_DLQ. But this involves additional moving parts - a trigger and a trigger monitor on each QM. It would be most desirable to be able to configure a cluster Q/QRemote/QAlias as the DLQ on the QM. Thoughts/ideas???
Thanks
-Ravi
Per the documentation here:
A dead-letter queue has no special requirements except that:
It must be a local queue
Its MAXMSGL (maximum message length) attribute must enable the queue to accommodate the largest messages that the queue manager has to handle, plus the size of the dead-letter header (MQDLH)
The DLQ provides a means for a QMgr to handle messages that a channel was unable to deliver. If the DLQ were not local then the error handling for channels would itself be dependent on channels. This would present something of an architectural design flaw.
The prescribed way to do what you require is to trigger a job to forward the messages to the remote queue. That way, whenever a message hits the DLQ, the triggered job fires up and forwards the messages. If you didn't want to write such a program, you could easily use a bit of shell or Perl code and the Q program from SupportPac MA01. It would be advisable for the channels used to send such messages off the QMgr to be set not to use the DLQ. Ideally, these would exist in a dedicated cluster so that DLQ traffic did not mix with application traffic. A sketch of the triggering setup follows.
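A hedged MQSC sketch of that triggering setup (all object names and the script path are illustrative; a trigger monitor such as runmqtrm must be running, and fwd_dlq.sh is a hypothetical wrapper around MA01's q program):
* Trigger a forwarding job whenever a message lands on the local DLQ
DEFINE QLOCAL(LOCAL.DLQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) +
       PROCESS(FWD.DLQ.PROC) REPLACE
DEFINE PROCESS(FWD.DLQ.PROC) APPLTYPE(UNIX) +
       APPLICID('/usr/local/bin/fwd_dlq.sh') REPLACE
* fwd_dlq.sh might then move the messages with MA01's q program, e.g.:
*   q -m QM3 -I LOCAL.DLQ -o ENTERPRISE_DLQ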
Also, be aware that one of the functions of the DLQ is to move messages out of the XMitQ if a conversion error prevents them from being sent. Forwarding them to a central location would have the effect of putting them back onto the cluster XMitQ. Similarly, if the destination filled up, these messages would also sit on the sending QMgr's cluster XMitQ. If they built up there in sufficient numbers, a full cluster XMitQ would prevent all cluster channels from working. In that event you'd need some kind of tooling to let you selectively delete or move messages out of the cluster XMitQ, which would be a bit challenging.
With all that in mind, the requirement would seem to create more challenges than it solves. Recommendation: error handling for channels is best handled without further use of channels - i.e. locally.