I have two clusters CLUSMES and CLUSHUBS. Each cluster has two queue managers.
Cluster CLUSMES has QMGRS: QMGR1A and QMGR1B
Cluster CLUSHUBS has QMGRS: QMGR3A and QMGR3B
There is a Gateway QMGR: QMGR2, which forms the overlap and is a partial repository in each MQ cluster.
Request messages are sent from either QMGR1A/B to either QMGR3A/B via QMGR2, which load-balances across QMGR3A/B within the cluster (this works fine), and a reply is expected back at the sending QMGR.
All channel connectivity is in place and fully functional. The problem is how to return the reply to where the request came from. The replying application connects to QMGR3A/B and issues a put, and I get either REMOTE_QMGR not found (MQRC 2087) or MQ object not found (MQRC 2085), depending on how I have it configured. The MQMD of the request correctly contains the ReplyToQueue and ReplyToQMgr.
I would like the replying application to simply issue a put and have the message delivered to the proper queue in CLUSMES, but this is proving extremely difficult. I have played with remote QMgr aliases and QAliases on the gateway QMgr QMGR2, but no luck. There has got to be a simple trick to this, and there are plenty of examples, but I have not been able to implement one successfully. A clear-cut example of what my return path should be would be most helpful. Keep in mind that the ReplyToQMgr is in the MQMD and resolution needs to occur from that, at the QMGR2 level where both clusters are known. Concrete, complete suggestions appreciated.
MQ Definitions on the QMGR1A/B, where the REPLY is expected:
DEFINE QLOCAL('SERVER.REPLYQ') CLUSTER('CLUSMES')
On QMGR2 (the gateway for message hopping):
DEFINE NAMELIST(CLUSTEST) NAMES(CLUSMES,CLUSHUBS)
DEFINE QALIAS(SERVER.ALIAS.REPLYQ) TARGQ(SERVER.REPLYQ) CLUSTER(CLUSTEST) DEFBIND(NOTFIXED)
DEFINE QREMOTE(QMGR1A) RNAME('') RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(QMGR1B) RNAME('') RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)
From QMGR3A/B, putting to the clustered QALIAS(SERVER.ALIAS.REPLYQ) failed: the gateway QMGR could not resolve the base queue (MQRC_UNKNOWN_ALIAS_BASE_Q, 2082). This was the configuration when I tried to resolve the reply path through the cluster.
When the request message is sent by the application, it would specify a ReplyToQMgr of either QMGR1A or QMGR1B, and a ReplyToQueue naming a queue that is present on QMGR1A and QMGR1B; the reply queue need not be clustered.
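For illustration, here is a minimal pymqi sketch of the requesting side (the channel, connection info and request queue name are made up for the example; the essential part is that only the MQMD ReplyToQ and ReplyToQMgr fields drive the return path):

import pymqi

# Hypothetical connection details - substitute your own
qmgr = pymqi.connect('QMGR1A', 'APP.SVRCONN', 'host1(1414)')

md = pymqi.MD()
md.MsgType = pymqi.CMQC.MQMT_REQUEST
md.ReplyToQ = 'SERVER.REPLYQ'   # local queue on the sending QMgr
md.ReplyToQMgr = 'QMGR1A'       # resolved on the way back via the gateway's QMgr alias

queue = pymqi.Queue(qmgr, 'SERVER.REQUESTQ')  # hypothetical clustered request queue
queue.put('request payload', md)
queue.close()
qmgr.disconnect()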
On gateway queue manager QMGR2 you would define the following objects:
DEFINE QREMOTE(QMGR1A) RNAME('') RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(QMGR1B) RNAME('') RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)
This allows any queue manager in the cluster CLUSHUBS to route reply messages back to QMGR1A and QMGR1B via the gateway queue manager QMGR2.
If you want to limit which queues on QMGR1A and QMGR1B the queue managers in the CLUSHUBS cluster can put to, you would need to take a different approach; a sketch of one option follows. Let me know if that is what you need and I will update my answer with more detail.
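One common pattern, sketched here with assumed object names rather than as a tested configuration, is to define a remote-queue definition on the gateway for each reply queue you want to expose, instead of the blanket queue-manager aliases above:

DEFINE QREMOTE(SERVER.REPLYQ.QM1A) RNAME(SERVER.REPLYQ) RQMNAME(QMGR1A) XMITQ('') CLUSTER(CLUSHUBS)
DEFINE QREMOTE(SERVER.REPLYQ.QM1B) RNAME(SERVER.REPLYQ) RQMNAME(QMGR1B) XMITQ('') CLUSTER(CLUSHUBS)

The requesting application would then set ReplyToQ to the advertised name (for example SERVER.REPLYQ.QM1A) and ReplyToQMgr to the gateway QMGR2. Replies flow to the gateway, which forwards them only to the queues you have explicitly defined; anything else addressed directly to QMGR1A/B fails to resolve.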
I'd like to set up a battery of tests for testing new installations of QMs. Most of them will be in a "remote" setup. I'd like to be able to send a message to the new QM and receive a response. Is there any way to make a "loop" just with normal configuration (i.e. without using exits)?
My former employer was a bank and we had many connections to clearing houses and other vendors. On many of these I was able to have the business partner set up loopback queues on their end. These consisted of a QRemote on my QMgr that pointed to a QRemote on the other side, which pointed to a QLocal on the originating QMgr on my side. When a message was PUT to the QRemote, it flowed to the other side, landed on the QRemote there, and was directed back to the QLocal.
If the message didn't make it back, we would then start our inbound channel (which was defined as a RQSTR on our side). If this caused the channel to start and the message arrived then we knew the channel initiator on the other side was down but at least we could keep the channel running until it was fixed. But if this didn't work we knew there was some problem on their side other than channel triggering.
And, yes, this was production. We figured that non-transactional instrumentation messages are valid in production environments; as far as we were concerned, this was the same as a ping or cluster control message. Of course, we had a web screen and other instrumentation to kick them off, so usage of the utility was controlled and recorded.
There are similar methods in which QMgr aliasing is used, and this can greatly cut down on the number of loopback queues needed in a large network. Do NOT be tempted to do this! When we use QRemotes that specify a fully-qualified destination, all they can do is send and receive loopback messages on the prescribed queue. But if a QRemote is used without an RNAME then it can be used to send messages to any queue on the adjacent QMgr.
Let's say you have QMA and QMB:
runmqsc QMA
DEFINE QLOCAL(QMB.LOOPBACK) REPLACE
DEFINE QREMOTE(QMB.LOOPBACK.RMT) +
XMITQ(QMB) +
RQMNAME(QMB) +
RNAME(QMB.QMA.LOOPBACK.RMT) +
REPLACE
END
runmqsc QMB
DEFINE QREMOTE(QMB.QMA.LOOPBACK.RMT) +
XMITQ(QMA) +
RQMNAME(QMA) +
RNAME(QMB.LOOPBACK) +
REPLACE
Now you can go to the server hosting QMA and do:
echo Hello World | amqsput QMB.LOOPBACK.RMT QMA
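To verify the round trip, read the reflected message back off the QLocal with the amqsget sample (it takes the queue name and QMgr name, and waits around 15 seconds before exiting):

amqsget QMB.LOOPBACK QMA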
Note that the name of the loopback reflector on QMB has both QMgr names in it. This is because you probably want to set up loopback from both sides and cannot use QMB.LOOPBACK.RMT both as a reflector and to initiate loopbacks. In that case the object inventory looks more like this:
runmqsc QMA
DEFINE QLOCAL(QMB.LOOPBACK) REPLACE
DEFINE QREMOTE(QMB.LOOPBACK.RMT) +
XMITQ(QMB) +
RQMNAME(QMB) +
RNAME(QMB.QMA.LOOPBACK.RMT) +
REPLACE
DEFINE QREMOTE(QMA.QMB.LOOPBACK.RMT) +
XMITQ(QMB) +
RQMNAME(QMB) +
RNAME(QMA.LOOPBACK) +
REPLACE
END
runmqsc QMB
DEFINE QREMOTE(QMB.QMA.LOOPBACK.RMT) +
XMITQ(QMA) +
RQMNAME(QMA) +
RNAME(QMB.LOOPBACK) +
REPLACE
DEFINE QLOCAL(QMA.LOOPBACK) REPLACE
DEFINE QREMOTE(QMA.LOOPBACK.RMT) +
XMITQ(QMA) +
RQMNAME(QMA) +
RNAME(QMA.QMB.LOOPBACK.RMT) +
REPLACE
END
Note that all of the objects sort out based on the name of the remote QMgr. Some people prefer names like LOOPBACK.QMB.RMT to make all the loopback queues cluster together in a list of objects or a backup.
All of this is a great target for automation since the names of the objects can all be derived from the names of the QMgrs.
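As a sketch of that automation (plain string templating only, no MQ API calls; it generates exactly the definitions shown above), a small Python script could emit the MQSC for any pair of QMgrs:

# Emit loopback MQSC definitions for a pair of queue managers.
def loopback_mqsc(local, remote):
    names = {"l": local, "r": remote}
    on_local = (
        "DEFINE QLOCAL(%(r)s.LOOPBACK) REPLACE\n"
        "DEFINE QREMOTE(%(r)s.LOOPBACK.RMT) XMITQ(%(r)s) "
        "RQMNAME(%(r)s) RNAME(%(r)s.%(l)s.LOOPBACK.RMT) REPLACE"
    ) % names
    on_remote = (
        "DEFINE QREMOTE(%(r)s.%(l)s.LOOPBACK.RMT) XMITQ(%(l)s) "
        "RQMNAME(%(l)s) RNAME(%(r)s.LOOPBACK) REPLACE"
    ) % names
    return on_local, on_remote

local_side, remote_side = loopback_mqsc("QMA", "QMB")
print "=== run on QMA ==="
print local_side
print "=== run on QMB ==="
print remote_side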
I can think of a couple of ways to do this. It rather depends on whether you want to include the MQPUT and MQGET on the remote system target queue.
Define the target queue as a QREMOTE pointing back to the originating system, with a real QLOCAL as the final destination back on the original system.
Use something like SupportPac MA01 (the Q program) to echo the message back when it lands on a real QLOCAL queue on the remote system. The -e option lets you echo the message back to the ReplyToQ and ReplyToQMgr fields.
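For example, a reflector might be started on the remote QMgr like this (hedged: only the -e echo option is confirmed above; the -m and -I option letters, and the names used, are assumptions to verify against the MA01 documentation for your version):

q -m QM2 -I LOOPBACK.QUEUE -e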
In TIBCO EMS, which I am familiar with, there is a feature called "destination bridges".
Queues and topics can be bridged (linked) so that the second destination becomes a client of the first (queue to queue, topic to queue, queue to topic, topic to topic).
For instance, a topic can be bridged to a queue, which in essence will become a durable subscriber of the messages submitted to the topic. Clients can subscribe to the topic OR read from the queue. This example is a way to load balance the reading of a pub/sub for multiple clients (readers of the queue).
This "bridge" feature can also involve message selectors and destination name wilcards.
So, a QUEUE X can be the client of TOPIC.* with condition CUST_ID(a JMS attribute)>30.
In that case, all message submitted to TOPIC.A OR TOPIC.B fitting the criteria would end up in QUEUE X. All this does not involve anything but simple EMS configuration.
I don't know enough about WebSphere MQ, and I need similar behavior. Will I have to develop a processing program outside of MQ, or can a feature within the product suffice?
Note: I have already gone through the MQ documentation and found the "alias queues" feature. Since that feature should really be called "shortcut queue" and does not involve two destinations, I don't think it can help me.
Thanks!
Edit: For reference, the command (DEF SUB) enabling this in MQ is documented here.
Edit 2: The selected answer covers the "Topic -> Queue" pattern from the TIBCO EMS "destination bridge" feature. Please note that the "Q->Q", "T->T" and "Q->T" patterns are not covered here.
Easy! Define your queue to receive the subscription and then define a durable administrative subscription.
DEF QL(MY.SUBSCRIBER.QUEUE)
DEF SUB('MY.SUBSCRIPTION') +
    TOPICSTR('SOME/TOPIC/#') +
    DEST('MY.SUBSCRIBER.QUEUE') +
    SELECTOR('JMSType = ''car'' AND color = ''blue'' AND weight > 2500') +
    REPLACE
The Infocenter has a section on Selector Syntax and a page for the DEFINE SUB command.
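For a quick smoke test you can use the amqspub and amqsget samples shipped with MQ (QMGR1 below is a placeholder queue manager name). Note that a plain amqspub message carries none of the message properties named in the selector above, so it will be filtered out; for the smoke test, define the subscription without the SELECTOR clause first:

amqspub SOME/TOPIC/TEST QMGR1
amqsget MY.SUBSCRIBER.QUEUE QMGR1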
Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub) the publisher finishes, but the subscriber hangs having not received all the messages. Why?
I think the socket is being closed before all the messages have been sent? Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring
context=zmq.Context()
socket=context.socket(zmq.PUB)
socket.bind("tcp://*:5556")
y=0
for x in xrange(5000):
    st = random.randrange(1,10)
    data = []
    data.append(random.randrange(1,100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0,10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st,s)
    socket.send("%d %s" % (st,s))
    print "Messages sent: %d" % x
    y+=1
print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring
# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
filter = "" # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)
x=0
while True:
    topic,data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic,data,x)
    x+=1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); term() blocks until all queued messages have been sent. You also have the slow-joiner problem: you can add a sleep after the bind, but before you start publishing. This works in a test case, but you will want to really understand what the actual solution is versus a band-aid. A sketch of both fixes follows.
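A minimal sketch of both fixes on the publishing side, using the same port as the code above (the sleep is the band-aid for the slow-joiner problem; a robust design would synchronize subscribers explicitly, e.g. with a REQ/REP handshake as in the ZeroMQ guide):

import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

time.sleep(1)  # band-aid: give subscribers time to connect before publishing

for x in xrange(5000):
    socket.send("%d hello" % (x % 10))

# close() and term() flush outstanding messages: with the default LINGER,
# term() blocks until queued messages have been handed to the network.
socket.close()
context.term()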
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status-type message: it doesn't matter if you have any of the previous statuses, because the latest one is correct and message loss doesn't matter. The benefit of this approach is that you end up with a more robust and performant system. The downside is that you can't always design your messages this way.
Your example, however, includes a type of message that tolerates no loss. Another example would be transactional messages: if you only sent the deltas of what changed in your system, you could not afford to lose any of them. Database replication is often managed this way, which is why DB replication is often so fragile. To provide guarantees, you need to do a couple of things. First, add a persistent cache: each message sent needs to be logged in the persistent cache and assigned a unique id (preferably a sequence) so that the clients can determine if they are missing a message. Second, add a second socket (ROUTER/REQ) for the client to request the missing messages individually. Alternatively, you could use the secondary socket to request resending over the PUB/SUB socket; the clients would then all receive the messages again (which works for the multicast version) and would ignore the messages they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
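A minimal sketch of the cache-plus-resend idea (the second port, 5557, and the payload are made up; an in-memory dict stands in for the persistent cache, which a real system would persist to disk):

import time
import zmq

context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")
resend = context.socket(zmq.ROUTER)  # clients connect with REQ and ask for a sequence number
resend.bind("tcp://*:5557")

cache = {}  # seq -> message; stand-in for persistent storage
seq = 0

while True:
    seq += 1
    msg = "%d payload" % seq
    cache[seq] = msg   # log before sending
    pub.send(msg)

    # serve any pending resend requests without blocking the publish loop
    while resend.poll(0):
        ident, empty, wanted = resend.recv_multipart()
        resend.send_multipart([ident, empty, cache.get(int(wanted), "")])

    time.sleep(0.1)  # throttle the demo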
An alternative approach is to create your own broker using ROUTER/DEALER sockets. When the ROUTER socket sees each DEALER connect, it stores its ID; when the ROUTER needs to send data, it iterates over all client IDs and sends the message to each. Each message should contain a sequence number so that the client knows which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn. A rough sketch follows.
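A rough sketch of that broker (the port and payload are made up; note that ROUTER gets no explicit connect notification, so each DEALER must announce itself with a hello message first):

import time
import zmq

context = zmq.Context()
router = context.socket(zmq.ROUTER)
router.bind("tcp://*:5558")

clients = set()  # DEALER identities seen so far
seq = 0

while True:
    # collect hello messages so the ROUTER learns client identities
    while router.poll(0):
        ident, body = router.recv_multipart()  # DEALER sends a single frame
        clients.add(ident)

    seq += 1
    for ident in clients:
        router.send_multipart([ident, "%d payload" % seq])
    time.sleep(1)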
I would truly appreciate some help with developing a simple pub/sub flow using Message Broker 7.0 and MQ 7.0.
My flow is supposed to accept a certain message with no header, filter it based on the <publish> field (process only if the value is "yes"), and then publish the body to all the topics listed in the <topic> nodes of the message:
<pub>
  <header>
    <topics>
      <topic>Topic1</topic>
      <topic>Topic2</topic>
    </topics>
    <properties>
      <property>
        <publish>yes</publish>
      </property>
    </properties>
  </header>
  <body>
    <a>
      <b>The publication</b>
    </a>
  </body>
</pub>
This is my flow (diagram not reproduced here):
I registered a topic and a subscription in MQ, but I'm pretty much lost as to what I'm supposed to do next. I used RFHUtil for testing point-to-point applications, but I have no idea how to make use of it while developing publish-subscribe.
Questions:
1. Is it correct to use just a simple queue as the publisher? (In the MQInput node I just set "IN", the queue I have in MQ.)
2. How do I register a subscriber/multiple subscribers in this flow? What is the subscription point?
It's just a learning task.
Any help is welcome!
For a normal pub/sub flow you can have something like the following.
Set the queue name of the MQInput node to your input queue; let's name it "inputQ".
The message is then read by the MQInput node from "inputQ" and passed on to the Compute node.
In the Compute node you need to set the message type to publish and also set the topic name, before you pass the message to the Publication node.
You can use the following code for that:
SET OutputRoot.MQRFH2.psc.Command = 'Publish';
SET OutputRoot.MQRFH2.psc.Topic = 'YourTopicString';
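A minimal Compute node sketch around those two lines (assuming the incoming body is to be forwarded unchanged; copy the input message first so the RFH2 folder is added on top of it):

CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    -- forward the incoming message as the publication body
    SET OutputRoot = InputRoot;
    -- mark it as a publication and set the topic
    SET OutputRoot.MQRFH2.psc.Command = 'Publish';
    SET OutputRoot.MQRFH2.psc.Topic = 'YourTopicString';
    RETURN TRUE;
END;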
"How do I register subscriber/multiple subscribers in this flow? "
I am assuming your problem is "How to publish message with different topics from same flow".
Now suppose you have multiple topics to be published from the same flow. You can't do it all at once. One message can have one topic.
But you can achieve it as below (suppose you have 3 topics):
SET OutputRoot.MQRFH2.psc.Command = 'Publish';
SET OutputRoot.MQRFH2.psc.Topic = 'Topic1';
PROPAGATE TO TERMINAL 'out' DELETE NONE;
SET OutputRoot.MQRFH2.psc.Topic = 'Topic2';
PROPAGATE TO TERMINAL 'out' DELETE NONE;
SET OutputRoot.MQRFH2.psc.Topic = 'Topic3';
PROPAGATE TO TERMINAL 'out' DELETE NONE;
RETURN FALSE;
However, if your requirement is publishing a single topic that multiple queues should pick up, then it's simpler.
You just need to create subscriptions for all those queues for your topic.
We are attempting to consolidate the DLQs across the board in our enterprise into a single queue (an ENTERPRISE_DLQ if you will...). We have a mix of QMs on various platforms - mainframe, various Unix flavours (Linux, AIX, Solaris etc.), Windows, AS/400....
The idea was to configure the DLQ on each QM (set the DEADQ attribute on the QM) to the ENTERPRISE_DLQ, which is a cluster queue. All the QMs in the enterprise are members of the cluster. This approach, however, does not seem to work when we tested it.
I have tested this by setting up a simple cluster with 4 QMs. On one of the QMs I defined a QRemote to a non-existent QM and non-existent queue, but with a valid XMitQ, and configured the requisite SDR channel between the QMs as follows:
QM_FR - Full_Repos
QM1, QM2, QM3 - members of the Cluster
QM_FR hosts ENTERPRISE_DLQ which is advertised to the Cluster
On QM3 setup the following:
QM3.QM1 - sdr to QM1, ql(QM1) with usage xmitq, qr(qr.not_exist) rqmname(not_exist) rname(not_exist) xmitq(qm1), setup QM1 to trigger-start QM3.QM1 when a msg arrives on QM1
On QM1:
QM3.QM1 - rcvr chl, ql(local_dlq), ql(qa.enterise_dlq), qr(qr.enterprise.dlq)
Test 1:
Set deadq on QM1 to ENTERPRISE_DLQ, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - ENTERPRISE_DLQ!!
ql(qm1) curdepth(1)
Test 2:
Set deadq on QM1 to qr.enterprise.dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qr.enterprise.dlq (all caps)!!
ql(qm1) curdepth(2)
Test 3:
Set deadq on QM1 to qa.enterise_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RETRYING, QM1 error logs complain about not being able to MQOPEN the Q - qa.enterise_dlq (all caps)!!
ql(qm1) curdepth(3)
Test 4:
Set deadq on QM1 to local_dlq, write a msg to QR.NOT_EXIST on QM3
Result: Msg stays put on QM1, QM3.QM1 is RUNNING, all msgs on QM3 ql(QM1) make it to local_dlq on QM3.
ql(qm1) curdepth(0)
Now the question: it looks like the DLQ on a QM must be a local queue. Is this a correct conclusion? If not, how can I make all the DLQ msgs go to a single queue - the ENTERPRISE_DLQ above?
One obvious solution is to define a trigger on local_dlq on QM3 (and the same on the other QMs) which will read the msg and write it to the cluster queue ENTERPRISE_DLQ. But this involves additional moving parts - a trigger and a trigger monitor on each QM. It would be most desirable to be able to configure a cluster queue/QRemote/QAlias as the DLQ on the QM. Thoughts/ideas???
Thanks
-Ravi
Per the documentation here:
A dead-letter queue has no special requirements except that:
It must be a local queue
Its MAXMSGL (maximum message length) attribute must enable the queue to accommodate the largest messages that the queue manager has to handle, plus the size of the dead-letter header (MQDLH)
The DLQ provides a means for a QMgr to handle messages that a channel was unable to deliver. If the DLQ were not local then the error handling for channels would itself be dependent on channels. This would present something of an architectural design flaw.
The prescribed way to do what you require is to trigger a job to forward the messages to the remote queue. This way, whenever a message hits the DLQ, the triggered job fires up and forwards the message. If you didn't want to write such a program, you could easily use a bit of shell or Perl code and the Q program from SupportPac MA01. It would be advisable for the channels used to send such messages off the QMgr to be set not to use the DLQ. Ideally, these would exist in a dedicated cluster so that DLQ traffic did not mix with application traffic.
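As a rough illustration (the queue and QMgr names are made up, the MQDLH stays on the forwarded message, and the Q program's option letters should be verified against the MA01 documentation; -m, -I and -o here are assumptions), the triggered process could be little more than:

#!/bin/sh
# Hypothetical DLQ forwarder, fired by the trigger monitor when a
# message lands on the local DLQ: destructive get from LOCAL_DLQ,
# put to the clustered ENTERPRISE_DLQ.
q -m QM3 -I LOCAL_DLQ -o ENTERPRISE_DLQ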
Also, be aware that one of the functions of the DLQ is to move messages out of the XMitQ if a conversion error prevents them from being sent. Forwarding them to a central location would have the effect of putting them back onto the cluster XMitQ. Similarly, if the destination filled up, these messages would also sit on the sending QMgr's cluster XMitQ. If they built up there in sufficient numbers, a full cluster XMitQ would prevent all cluster channels from working. In that event you'd need some kind of tooling to let you selectively delete or move messages out of the cluster XMitQ, which would be a bit challenging.
With all that in mind, the requirement would seem to present more challenges than it solves. Recommendation: error handling for channels is best handled without further use of channels - i.e. locally.