I'm creating a simple Message Broker flow. It goes MQInput node -> Compute node -> MQOutput node, where all the Compute node does is:
CALL CopyEntireMessage();
SET OutputRoot.Properties.MessageFormat = 'XML1';
It should only change the message format from Binary1 to XML1. However, the MQOutput node fails and sends the message along its Failure connection. I'm unclear as to the reasons an MQOutput node could fail.
You should check the following points:
Check the parsing options of the Compute node in the flow (Properties > Advanced).
Specify the message set name in the ESQL.
Specify the output queue name (reply-to) in the ESQL.
Check that the input and output queues exist in MQ.
Check that the function you are calling exists in the ESQL.
It would be easier to help if you could provide the failure message.
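If the failure is parser-related, the usual culprit is that OutputRoot.Properties still carries the Binary1 message set details. A minimal sketch of the Compute node, assuming the MRM domain (the message set and type names below are placeholders for your own):
CREATE COMPUTE MODULE MyFlow_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    CALL CopyEntireMessage();
    -- Placeholders: point these at your own message set and message type
    SET OutputRoot.Properties.MessageSet    = 'MYMESSAGESET';
    SET OutputRoot.Properties.MessageType   = 'MyMessageType';
    SET OutputRoot.Properties.MessageFormat = 'XML1';
    RETURN TRUE;
  END;
END MODULE;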
I would appreciate it if anyone could assist or point me to a guide/tutorial for using IBM IIB (Integration Toolkit) and IBM MQ with an MQInput node, Compute node, and MQOutput node, such that when a message is put on the input queue, it is routed to an output queue based on the MQRFH2 headers and usr properties set/defined in the Compute node (ESQL file).
E.g. if MQRFH2/usr = 1, route the message to Queue 1; if MQRFH2/usr = 2, route the message to Queue 2; etc.
Thanks in advance.
Please read Accessing the MQRFH2 header and Populating Destination in the local environment tree.
Then you could write your ESQL like this (assuming the RFH2 routing variable is named Ker):
CREATE COMPUTE MODULE Routing_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName =
      CASE InputRoot.MQRFH2.usr.Ker
        WHEN '1' THEN 'Q1'
        WHEN '2' THEN 'Q2'
        ELSE 'Q3'
      END;
    RETURN TRUE;
  END;
END MODULE;
Remember to change the default node configuration like this:
Compute node: set Compute mode to LocalEnvironment
MQOutput node: set Destination mode to Destination List
Example: if the input message's RFH2 usr field Ker has the value 2, the message is routed to queue Q2.
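As an aside, Destination List mode also supports fan-out: every DestinationData entry you append becomes another target queue (the queue names here are illustrative):
SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName = 'Q1';
SET OutputLocalEnvironment.Destination.MQ.DestinationData[2].queueName = 'Q2';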
I have two clusters CLUSMES and CLUSHUBS. Each cluster has two queue managers.
Cluster CLUSMES has QMGRS: QMGR1A and QMGR1B
Cluster CLUSHUBS has QMGRS: QMGR3A and QMGR3B
There is a Gateway QMGR: QMGR2, which forms the overlap and is a partial repository in each MQ cluster.
Request messages are sent from either QMGR1A/B to either QMGR3A/B via QMGR2, which load-balances across QMGR3A/B (this works fine), and a reply is expected back on the sending QMGR.
All channel connectivity is in place and fully functional. The problem is how to return the message to where it came from. The replying application connects to QMGR3A/B and issues a put. I get either a REMOTE_QMGR not found (MQRC 2087) or an MQ object not found (MQRC 2085), depending on how I have it configured. The MQMD of the message contains the ReplyToQueue and ReplyToQMgr correctly.
I would like the replying application to just issue a put and have the message delivered to the proper queue in CLUSMES, but this is proving to be extremely difficult. I have played with a remote QMgr alias and a QAlias on the gateway QMgr QMGR2, but no luck. There has got to be a simple trick to this, and there are plenty of examples, but I have not been able to implement one successfully. A clear-cut example of what my return path should be would be most helpful. Keep in mind that the ReplyToQMgr is in the MQMD and resolution needs to occur from that. I need resolution to occur at the QMGR2 level, where both clusters are known. Concrete, full suggestions appreciated.
MQ Definitions on the QMGR1A/B, where the REPLY is expected:
DEFINE QLOCAL('SERVER.REPLYQ') CLUSTER('CLUSMES')
On QMGR2 (the gateway for message hopping):
DEFINE NAMELIST(CLUSTEST) NAMES(CLUSMES,CLUSHUBS)
DEFINE QALIAS(SERVER.ALIAS.REPLYQ) TARGQ(SERVER.REPLYQ) CLUSTER(CLUSTEST) DEFBIND(NOTFIXED)
DEFINE QREMOTE(QMGR1A) RNAME(' ') RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(QMGR1B) RNAME(' ') RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)
On QMGR3A/B, the reply was put to the QALIAS(SERVER.ALIAS.REPLYQ) cluster queue. The gateway QMGR could not resolve the base queue: MQRC_UNKNOWN_ALIAS_BASE_Q (2082).
This was the configuration when trying to resolve it using the cluster.
When the application sends the request message, it should set ReplyToQMgr to either QMGR1A or QMGR1B, and ReplyToQ to the name of a queue that is present on that queue manager; the reply queue need not be clustered.
On gateway queue manager QMGR2 you would define the following objects:
DEFINE QREMOTE(QMGR1A) RNAME('') RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(QMGR1B) RNAME('') RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)
This allows any queue manager in the cluster CLUSHUBS to route reply messages back to QMGR1A and QMGR1B via the gateway queue manager QMGR2.
If you want to limit which queues on QMGR1A and QMGR1B the queue managers in the CLUSHUBS cluster can put to, you would need to take a different approach. Let me know if that is what you need and I will update my answer with some suggestions.
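Putting the pieces together, the full reply path would look like this (a sketch based on the definitions above; SERVER.REPLYQ stands in for whatever reply queue the application names in the MQMD):
* On QMGR1A (and QMGR1B): a plain local reply queue, with no CLUSTER attribute
DEFINE QLOCAL('SERVER.REPLYQ')
* On gateway QMGR2: queue-manager aliases advertised to CLUSHUBS
DEFINE QREMOTE(QMGR1A) RNAME('') RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(QMGR1B) RNAME('') RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)
* The replying app opens ReplyToQ with ReplyToQMgr taken from the MQMD.
* QMGR3A/B has no local object named QMGR1A, but the clustered QREMOTE on
* QMGR2 advertises that name to CLUSHUBS, so the reply hops to QMGR2, which
* resolves RQMNAME(QMGR1A) across CLUSMES and delivers to SERVER.REPLYQ.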
I have a WTX map which puts a message on WMQ "Q1". There is some other application which reads the message from "Q1" and then processes the message and places the response on the queue specified in "ReplyToQ" available on MQ Header information.
I am not able to find a command parameter to add the ReplyToQ to the message the WTX map is placing on "Q1".
Any thoughts?
Thanks for taking the time to look at this question and helping out!
The ReplyToQ is carried in the message header (the MQMD), so you must get the message with the header (-HDR) and parse it from there on your input card.
Here's the doc about the adapter's command:
http://pic.dhe.ibm.com/infocenter/wtxdoc/v8r4m1/index.jsp
There should be a type tree in the Examples for reading a message with a header.
Regards,
Bruno.
I would truly appreciate some help with developing a simple pub/sub flow using Message Broker 7.0 and MQ 7.0.
My flow is supposed to accept a certain message with no header, filter it on the <publish> field (process if the value is "yes"), and then publish the body to all the queues listed in the <topic> nodes of the message:
<pub>
  <header>
    <topics>
      <topic>Topic1</topic>
      <topic>Topic2</topic>
    </topics>
    <properties>
      <property>
        <publish>yes</publish>
      </property>
    </properties>
  </header>
  <body>
    <a>
      <b>The publication</b>
    </a>
  </body>
</pub>
I registered a topic and a subscription in MQ, but I'm pretty much lost as to what I'm supposed to do next.
I used RFHUtil for testing a point-to-point application, but I have no idea how to make use of it while developing publish/subscribe.
Questions:
1. Is it correct to use just a simple queue as the publisher (in MQInput I just set "IN", the queue I have in MQ)?
2. How do I register a subscriber/multiple subscribers in this flow? What is the subscription point?
It's just a learning task.
Any help is welcome!
For a normal pub/sub flow you can have something like the below:
Set the queue name of the MQInput node to your input queue; let's name it "inputQ".
Now the message has been read by the MQInput node from "inputQ" and passed on to the Compute node.
In the Compute node you need to set the message type to publish and also set the topic name before passing the message to the Publication node.
You can use the code below for this:
SET OutputRoot.MQRFH2.psc.Command = 'Publish';
SET OutputRoot.MQRFH2.psc.Topic = 'YourTopicString';
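Put together, a minimal Compute module for the publish leg might look like this (a sketch assuming an XMLNSC body; copying the headers first keeps the new MQRFH2 ahead of the body in the output tree):
CREATE COMPUTE MODULE PubSub_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    CALL CopyMessageHeaders();
    -- Build the RFH2 pub/sub command folder after the MQMD, before the body
    SET OutputRoot.MQRFH2.psc.Command = 'Publish';
    SET OutputRoot.MQRFH2.psc.Topic = 'YourTopicString';
    SET OutputRoot.XMLNSC = InputRoot.XMLNSC;  -- body goes last
    RETURN TRUE;
  END;
END MODULE;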
"How do I register subscriber/multiple subscribers in this flow? "
I am assuming your problem is "how to publish messages with different topics from the same flow".
Now suppose you have multiple topics to be published from the same flow. You can't do it all at once; one message can have one topic.
But you can achieve it as below (suppose you have 3 topics):
SET OutputRoot.MQRFH2.psc.Command = 'Publish';
SET OutputRoot.MQRFH2.psc.Topic = 'Topic1';
PROPAGATE TO TERMINAL 'out' DELETE NONE; -- DELETE NONE keeps the tree for reuse
SET OutputRoot.MQRFH2.psc.Topic = 'Topic2';
PROPAGATE TO TERMINAL 'out' DELETE NONE;
SET OutputRoot.MQRFH2.psc.Topic = 'Topic3';
PROPAGATE TO TERMINAL 'out' DELETE NONE;
RETURN FALSE; -- FALSE suppresses the implicit final propagation
However, if your requirement is publishing on a single topic that multiple queues should pick up, it's simpler:
you just need to create subscriptions for all those queues on your topic.
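For example, in MQSC one administrative subscription per destination queue would do (the subscription and queue names here are placeholders):
DEFINE SUB('SUB.TO.Q1') TOPICSTR('YourTopicString') DEST('QUEUE1')
DEFINE SUB('SUB.TO.Q2') TOPICSTR('YourTopicString') DEST('QUEUE2')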
We are attempting to consolidate the DLQs across our enterprise into a single queue (an ENTERPRISE_DLQ, if you will...). We have a mix of QMgrs on various platforms: mainframe, various Unix flavours (Linux, AIX, Solaris, etc.), Windows, AS/400...
The idea was to set the DLQ on each QMgr (the DEADQ attribute on the QMgr) to the ENTERPRISE_DLQ, which is a cluster queue. All the QMgrs in the enterprise are members of the cluster. This approach, however, did not work when we tested it.
I tested this by setting up a simple cluster with 4 QMgrs. On one of the QMgrs I defined a QRemote to a non-existent QMgr and a non-existent queue but a valid XMitQ, and configured the requisite SDR channel between the QMgrs as follows:
QM_FR - Full_Repos
QM1, QM2, QM3 - members of the Cluster
QM_FR hosts ENTERPRISE_DLQ which is advertised to the Cluster
On QM3 setup the following:
QM3.QM1 - SDR to QM1; ql(QM1) with usage XMITQ; qr(qr.not_exist) rqmname(not_exist) rname(not_exist) xmitq(qm1); ql(QM1) is set up to trigger-start QM3.QM1 when a msg arrives on it
On QM1:
QM3.QM1 - rcvr chl, ql(local_dlq), ql(qa.enterise_dlq), qr(qr.enterprise.dlq)
Test 1:
Set deadq on QM1 to ENTERPRISE_DLQ, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - ENTERPRISE_DLQ!!
ql(qm1) curdepth(1)
Test 2:
Set deadq on QM1 to qr.enterprise.dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qr.enterprise.dlq (all caps)!!
ql(qm1) curdepth(2)
Test 3:
Set deadq on QM1 to qa.enterise_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qa.enterise_dlq (all caps)!!
ql(qm1) curdepth(3)
Test 4:
Set deadq on QM1 to local_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RUNNING, all msgs on QM3 ql(QM1) make it to local_dlq on QM3.
ql(qm1) curdepth(0)
Now the question: it looks like the DLQ on a QMgr must be a local queue. Is this a correct conclusion? If not, how can I make all the DLQ msgs go to a single queue, the ENTERPRISE_DLQ above?
One obvious solution is to define a trigger on local_dlq on QM3 (and do the same on the other QMgrs) which reads the msgs and writes them to the cluster queue ENTERPRISE_DLQ. But this involves additional moving parts: a trigger and a trigger monitor on each QMgr. It would be most desirable to be able to configure a cluster queue/QRemote/QAlias as the DLQ on a QMgr. Thoughts/ideas?
Thanks
-Ravi
Per the documentation here:
A dead-letter queue has no special requirements except that:
It must be a local queue
Its MAXMSGL (maximum message length) attribute must enable the queue to accommodate the largest messages that the queue manager has to handle, plus the size of the dead-letter header (MQDLH)
The DLQ provides a means for a QMgr to handle messages that a channel was unable to deliver. If the DLQ were not local then the error handling for channels would itself be dependent on channels. This would present something of an architectural design flaw.
The prescribed way to do what you require is to trigger a job to forward the messages to the remote queue. That way, whenever a message hits the DLQ, the triggered job fires up and forwards the messages. If you don't want to write such a program, you could easily use a bit of shell or Perl code and the Q program from SupportPac MA01. It would be advisable to set the channels used to send such messages off the QMgr not to use the DLQ. Ideally, these would exist in a dedicated cluster so that DLQ traffic did not mix with application traffic.
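For illustration, the trigger setup on QM3 might look like the following (the process name, script path, and the exact options of the MA01 q program are assumptions; check the SupportPac readme, and note a trigger monitor such as runmqtrm must be watching the initiation queue):
* Trigger local_dlq so a forwarder runs whenever a message lands on it
DEFINE PROCESS(DLQ.FORWARD.PROC) APPLTYPE(UNIX) APPLICID('/usr/local/bin/dlq_forward.sh')
ALTER QLOCAL(local_dlq) TRIGGER TRIGTYPE(FIRST) INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) PROCESS(DLQ.FORWARD.PROC)
* dlq_forward.sh could then be little more than a call to the MA01 q program:
*   q -m QM3 -i local_dlq -o ENTERPRISE_DLQ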
Also, be aware that one of the functions of the DLQ is to move messages out of the XMitQ if a conversion error prevents them from being sent. Forwarding them to a central location would have the effect of putting them back onto the cluster XMitQ. Similarly, if the destination filled up, these messages would also sit on the sending QMgr's cluster XMitQ. If they built up there in sufficient numbers, a full cluster XMitQ would prevent all cluster channels from working. In that event you'd need some kind of tooling to let you selectively delete or move messages out of the cluster XMitQ, which would be a bit challenging.
With all that in mind, the requirement would seem to create more problems than it solves. Recommendation: error handling for channels is best handled without further use of channels - i.e. locally.