I have a repository queue manager, say REPQMGR, in a WebSphere MQ cluster. What MQSC command should I use on REPQMGR to get a list of all the queue managers in the cluster? Specifically, I need to inquire a property value from all queue managers.
runmqsc
DIS CLUSQMGR(*) CLUSTER(clusname) ALL
This command will show you all the QMgrs in the cluster. You can also specify a particular attribute that you want to know about instead of using ALL.
If you are writing a script, have a look at SupportPac MO72. This is a version of runmqsc that a) can operate as a client; b) has a number of useful formatting options; and c) also manages CCDT files.
In particular, you can tell MO72 to return results on a single line rather than the standard runmqsc two-column format. This means that a script can look for the object name and attribute name on the same output line rather than having to parse many lines for a single object.
I have a requirement where I am publishing data to IBM MQ, and at around 10pm we run a scheduler that should consume only those messages that were published before 10pm. The messages that are published on or after 10pm should be picked up the next day.
Is there a way in IBM MQ where we can filter it based on the datetime and consume? Any suggestion?
Thanks in advance.
What you'd really like to be able to do is use a message selector something like this:-
Root.MQMD.PutDate = '20210924' AND Root.MQMD.PutTime < '22000000'
However, unfortunately, the only operators you are allowed to use with strings (and both of these fields are string fields) are = and <> (not-equals) (see IBM MQ Docs: Message selector syntax).
Alternatively, you could make use of QLOAD which can select messages based on time, and have it pre-process the queue by moving all the messages you should be processing to another queue and then allow your application to work on that entire queue. You could invoke QLOAD to do this as follows:-
qload -m QM1 -i INPUT.Q -Tt22:00 -o APPL.INPUT.Q
This command will read messages from a queue called INPUT.Q and will only move those that were put today before 22:00 (10pm) to the queue called APPL.INPUT.Q.
Of course, QLOAD is not doing anything here that you couldn't write into your own application. Just inspect the PutDate and PutTime fields in the MQMD and use those to decide whether to process the message or not.
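As a rough sketch of that decision logic (the helper name and cutoff handling are my own invention; the field formats are MQMD's, where PutDate is a YYYYMMDD string and PutTime is an HHMMSSTH string in GMT):

```ruby
# Decide whether a message belongs in tonight's batch run, based on the
# MQMD PutDate ("YYYYMMDD") and PutTime ("HHMMSSTH") string fields.
# cutoff_hhmm is e.g. "2200" for 10pm.
def before_cutoff?(put_date, put_time, run_date, cutoff_hhmm)
  return true  if put_date < run_date     # put on an earlier day
  return false if put_date > run_date     # put on a later day
  put_time[0, 4] < cutoff_hhmm            # same day: compare the HHMM part
end

before_cutoff?("20210924", "21594500", "20210924", "2200")  # => true
before_cutoff?("20210924", "22000000", "20210924", "2200")  # => false
```

Because these are fixed-width numeric strings, plain lexicographic comparison is equivalent to comparing the underlying dates and times.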
You can try to use a selector expression in the listener to filter by time.
This is an example adapted from the documentation on how to use a selector expression:
<flow name="JMSConnectorPublish">
<jms:listener config-ref="JMS_Config" destination="in" selector="JMSPriority=9"/>
</flow>
It doesn't use the time. You'll need to be familiar with the headers and attributes from IBM MQ/JMS messages to find the right expression. Probably attributes.headers.JMSTimestamp is going to be of interest to you.
I'm currently faced with a use case where I need to process multiple messages in parallel, but related messages should only be processed by one JMS consumer at a time.
As an example, consider the messages ABCDEF and DEFGHI, followed by a sequence number:
ABCDEF-1
ABCDEF-2
ABCDEF-3
DEFGHI-1
DEFGHI-2
DEFGHI-3
If possible, I'd like to have JMS consumers process ABCDEF and DEFGHI messages in parallel, but never two ABCDEF or two DEFGHI messages at the same time across two or more consumers. In my use case, the ordering of messages is irrelevant. I cannot use JMS filters because I would not know the group name ahead of time, and having a static list of group names is not feasible. Messages are sent via a system which is not under my control, and the group name always consists of 6 letters.
ActiveMQ seems to have implemented this via their message groups feature, but I can't find equivalent functionality in IBM MQ. My understanding is that this behaviour is driven by JMSXGroupId and JMSXGroupSeq headers, which are only defined in an optional part of the JMS specification.
As a workaround, I could always have a staging ground (a database perhaps), where all messages are placed and then have a fast poll on this database, but adding an extra piece of infrastructure seems overkill here. Moreover, it would also not allow me to use JMS transactions for reprocessing in case of failures.
To me this seems like a common use case in messaging architecture, but I can't find a simple yes/no answer anywhere online, and the IBM MQ documentation isn't very clear about whether this functionality is supported or not.
Thanks in advance
IBM MQ also has the concept of message groups.
The IBM MQ message header, called the Message Descriptor (MQMD) has a couple of fields in it that the JMS Group fields map to:-
JMSXGroupID -> MQMD GroupID field
JMSXGroupSeq -> MQMD MsgSeqNumber field
Read more here
IBM MQ docs: Mapping JMS property fields
IBM MQ docs: Message groups
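If you also want to serialize processing per group on the consuming side within a single process, one possible application-level sketch (this is not an MQ feature, and `GroupSerializer` is a made-up name) keeps one mutex per group ID:

```ruby
# Serialize work per 6-letter group prefix: messages for the same group
# run one at a time, while messages for different groups run in parallel.
class GroupSerializer
  def initialize
    @locks = Hash.new { |h, k| h[k] = Mutex.new }
    @guard = Mutex.new                      # protects the @locks hash itself
  end

  def process(message)
    group = message[0, 6]                   # e.g. "ABCDEF" from "ABCDEF-1"
    lock  = @guard.synchronize { @locks[group] }
    lock.synchronize { yield message }
  end
end
```

Each consumer thread would call `serializer.process(msg) { |m| handle(m) }`. Note this only serializes within one process; across multiple consumer processes you would still need MQ's own message grouping or some shared coordination.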
I am trying to write a consumer for an existing queue.
RabbitMQ is running in a separate instance and a queue named "org-queue" has already been created and bound to an exchange. org-queue is a durable queue and it has some additional properties as well.
Now I need to receive messages from this queue.
I have used the code below to get an instance of the queue:
conn = Bunny.new
conn.start
ch = conn.create_channel
q = ch.queue("org-queue")
It throws an error stating a different durable property. It seems that by default Bunny uses durable = false, so I've added durable: true as a parameter. Now it reports a mismatch on other parameters. Do I need to specify all the parameters to connect to it? As RabbitMQ is maintained by a different team, it is hard for me to get all the properties.
Is there a way to get the list of queues and listen to the required queue from the client, instead of connecting to the queue with all its parameters?
Have you tried the passive: true option on queue()? A real example is the rabbitmq plugin of logstash. Passive means to only check queue existence rather than to declare it when consuming messages from it.
Based on the documentation here http://reference.rubybunny.info/Bunny/Queue.html and
http://reference.rubybunny.info/Bunny/Channel.html
Using the ch.queues() method you could get a hash of all the queues on that channel. Then, once you find the instance of the queue you want to connect to, you could use the q.options() method to find out what options are set on that RabbitMQ queue.
Seems like a round about way to do it but might work. I haven't tested this as I don't have a rabbitmq server up at the moment.
Maybe there is a way to get the queue's details with rabbitmqctl or the admin tool (I have forgotten the name). Even if so, I would not bother.
There are two possible solutions that come to my mind.
First solution:
In general, if you want to declare an already existing queue, it has to be with ALL the correct parameters. So what I'm doing is having a helper function for declaring a specific queue (I'm using the C++ client, so the API may be different, but I'm sure the concept is the same). For example, if I have 10 subscribers that are consuming queue1, and each of them needs to declare the queue in the same way, I will simply write a util that declares this queue and that's that.
Before the second solution, a little aside: maybe this is a case of a misconception that comes up too often :)
You don't really need a specific queue to get the messages from that queue. What you need is a queue and the correct binding. When sending a message, you are not really sending to the queue, but to the exchange, sometimes with a routing key, sometimes without one - let's say with. On the receiving end you need a queue to consume a message, so naturally you declare one and bind it to an exchange with a routing key. You don't even need the name of the queue explicitly; the server will provide a generated one for you, so that you can use it when binding.
Second solution:
relies on the fact that
It is perfectly legal to bind multiple queues with the same binding key
(found here https://www.rabbitmq.com/tutorials/tutorial-four-java.html)
So each of your subscribers can declare a queue in whatever way they want, as long as they do the binding correctly. Of course these would be different queues with different names.
I would not recommend this, though. It implies that every message goes to multiple queues - one per subscriber - and I am assuming the use case here needs each message to be processed only once, by one subscriber.
What are the recommended guidelines for WebSphere MQ naming conventions for queue managers, queues (local, remote, transmit, dead letter queues...), channels etc. I found one at IBM's developerWorks but looking to see if there is anything else comprehensive out there. Thanks.
This sounds like a good topic for a new Mission:Messaging column but I'll write up the condensed version here. I'll preface my answer by noting that many of my recommendations are contrary to those you might find elsewhere. In some cases this is because the way MQ is commonly used has changed over the years. In other cases it is because the conventional wisdom never worked well to begin with. (Cluster channels named TO.QMGR for example.) In all cases, I prefer conventions which apply to the broadest number of situations. That means it is usually possible to find exceptions to these rules for specific cases but they are broadly applicable nonetheless.
Some general rules
The following apply to all object types.
Use the dot character . as a separator.
Authorization rules parse names using dot characters as separators. For example, the queue name MY.EXAMPLE.QUEUE.NAME would match rules like MY.*.*.* or MY.** but not MY.* because the dots signify name node separator characters. Do yourself a favor and use dots rather than underscores as your naming node separators consistently.
Use machine-parsable names.
When you have 5 queue managers and a few hundred objects, you can easily get by doing all your management manually with WMQ Explorer or runmqsc. However, there comes a point where consistency, reliability, repeatability and efficiency demand that you script up some of your routine operations or employ instrumentation to respond to network events. More than anything else this means eliminating ambiguity in names.
For example, if you create a naming convention that channel names must look like SRCQMGR.DESTQMGR then it is possible for a script to read a RCVR or SDR channel name and derive the names of the two queue managers it connects. However, what does the script do with a channel name like GA.PAYROLL.OPS? Is it the GA.PAYROLL queue manager connecting to the OPS queue manager? Or is it the GA queue manager connecting to the PAYROLL.OPS queue manager? A human might be able to tell instantly based on context but scripts are notorious for doing what you tell them rather than what you intended. Similar situations arise when queue names have position-dependent qualifiers at both the beginning and the end of the name and a variable number of nodes.
Stick with UPPERCASE names.
This is for compatibility across all platforms, and in particular z/OS. Although it is true that more z/OS shops are using mixed case, it is also true that there are still a lot of systems out there that only accept UPPERCASE names. While it is easy to say "this doesn't apply to me" I have seen many cases where somebody had problems interfacing to a new business partner because of incompatible names. After all, the ability to interface to just about any platform out there is one of the main reasons for using WMQ in the first place.
Don't include attributes of the object in the name
In an SOA world, queues and topics are different types of destination and often are interchangeable. Something putting messages to what it thinks is a queue doesn't necessarily know (or care!) if they are actually going to a queue or a topic. A queue that has an application listening on it for messages may be fed by an administrative subscription that is actually roping in publications from one or more topics.
What we really care about is the nature of the messages - what function do they perform - not whether we are connecting to a local queue or an alias queue. So adding qualifiers like .QA, .QL, .TPC (for topic) and so forth doesn't make sense. Similarly, adding .RCVR onto a channel name sucks up 5 useful characters that could have been better used describing the QMgr name. Worse, these practices bake the topology into the object names, making the system both less flexible and more brittle.
Channel names
Point-to-Point Channel names
Use names like SRCQMGR.DESTQMGR for RCVR, RQSTR, SDR, and SVR channels. This is biased toward languages that read left-to-right because the intention is to describe the data flow from one QMgr to another.
Cluster channels
Use names like CLUSNAME.QMNAME. The old wisdom said to use names like TO.QMNAME but if you ever implement overlapping clusters this causes the same channel to be used for multiple clusters. That's bad because you can then never perform maintenance on one cluster without impacting the other. Using CLUSNAME.QMNAME ensures that every QMgr has a dedicated CLUSRCVR channel for each cluster in which it participates.
Client Channels
The exception to the "don't include attributes of the object in the name" is arguably the SVRCONN channel. This is because channels are very much tied to the physical rather than the logical layer of the network. So putting the QMgr name in a SVRCONN channel name is generally OK. I don't object too strongly if people want to add .SVRCONN at the end, either.
The thing to remember about client channels is that if you use a Client Channel Definition Table (CCDT) then the unique index into that table is the channel name. That means you cannot have the same channel name on multiple QMgrs and still use a CCDT. Since the CCDT is one way to configure the SSL/TLS channel details, this is often not appreciated to its fullest until the "let's finally secure WMQ" project comes along. By using unique channel names for SVRCONN channels from the start, you can future-proof the network. Usually these names look like APP.QMNAME or to make it obvious you aren't dealing with a cluster or point-to-point channel, APP.QMNAME.SVRCONN or similar.
Queue manager names
No dots in QMgr names
One implication of the previous rules is that cluster and queue manager names must contain only one node and therefore should never contain a . character. This is because channel names are typically derived from the cluster and/or QMgr names. So in the example above, a RCVR channel name like GA_PAYROLL.OPS would tell both humans and scripts that the channel in question connects a QMgr named GA_PAYROLL to a QMgr named OPS.
Names of 9 chars or less
Channel names can be only 20 characters. Subtract one for the dot separator, divide by two and round down, and you get a maximum of 9 characters for queue manager names. If there's a possibility that you may set up different channels for classes of service (for large vs. small messages, for example), then drop back to 8 characters or less for QMgr names. This leads to QM1.QM2.A, QM1.QM2.B, etc.
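The arithmetic is trivial but worth sanity-checking in any name-generation tooling; this sketch (`channel_name_fits?` is a made-up helper) assumes only MQ's 20-character channel-name limit:

```ruby
MAX_CHANNEL_NAME = 20   # MQ's limit on channel name length

# Would a SRCQMGR.DESTQMGR or SRCQMGR.DESTQMGR.SUFFIX channel name fit?
def channel_name_fits?(*nodes)
  nodes.join(".").length <= MAX_CHANNEL_NAME
end

channel_name_fits?("QMGR9CHAR", "QMGR9CHAR")   # 9 + 1 + 9 = 19 => true
channel_name_fits?("QM1", "QM2", "A")          # "QM1.QM2.A" => true
channel_name_fits?("PAYROLLQMGR", "OPERATIONS") # 22 chars => false
```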
QMgr names reflect the physical layer
In a service-oriented world, we really care about destination names like queues and topics. We care much less about queue manager names because these are just life support for queues and topics. Client apps don't care so much about which QMgr they connect to, so long as they can send requests and receive replies. WMQ very conveniently fills in the reply-to QMgr name on outbound requests so it is rare that an application needs to know about it.
On the other hand, the administrators need to know about the QMgr name. In the early days it was common to name QMgrs for the host server. Later it became the fashion to name them for the applications they hosted. Now in the SOA world, messaging is infrastructure and usually not associated with any single application so the pendulum has swung back. Give the QMgr a unique name that is meaningful to an administrator.
Never reuse QMgr names!
It is unfortunately very common to "move" a QMgr from one place to another or to have a primary and disaster recovery QMgr with the same name. This practice usually means some part of the application is dependent on the QMgr name and therefore it is "easier" to reuse the name. IBM introduced the QMID as a way to address some of the problems introduced by reusing QMgr names. The typical use case is that a node gets rebuilt and the QMgr once there is also rebuilt from scratch. The cluster knows it is a new QMgr because the QMID has changed, but the name used for routing and other operations remains the same.
Although this helped in that limited use case, it doesn't address the issues when both QMgrs with the same name are online at the same time. Nor does it address the problem that a reputable Certificate Authority won't issue multiple certs with the same Distinguished Name which forces reuse of the same certificate for multiple QMgrs.
Remember that QMgrs are just life support for queues and topics and ideally will be anonymous to the applications using them. Pick a naming convention that allows you to spin up new QMgrs with unique names by the hundreds or thousands, if necessary, so that you don't have to reuse QMgr names.
Other objects
Use intention-revealing names
Or to put it another way, name the object for what it does and not what it is. For example, if you were in the habit (as many people are) of including qualifiers like .QL for local queues and .QA for aliases, then any change in topology will impact the applications using those queues. Instead, name the queues for the functions they represent.
Go left-to-right, most generic to most specific
Object names, especially queues, should be constructed hierarchically beginning with the most generic qualifier and proceeding to the most specific qualifier. For example, many shops use APP.FUNC.SUBFUNC.VER where APP is the ID of the owning application, then one or more nodes with the function and subfunction. Many shops add a version qualifier on the end so that new versions of a service can migrate their clients on separate schedules rather than changing the service on the existing queue and making all clients change at the same time.
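A convention like this is easy for tooling to take apart. Here is a hedged sketch (`qualifiers` is a hypothetical helper and the names are illustrative) that assumes the APP.FUNC.SUBFUNC.VER layout with the version always last:

```ruby
# Split a queue name into its hierarchical qualifiers, most generic to
# most specific, assuming the APP.FUNC.SUBFUNC.VER convention. Any extra
# middle nodes are folded into :subfunc.
def qualifiers(queue_name)
  app, func, *rest, ver = queue_name.split(".")
  { app: app, func: func, subfunc: rest.join("."), ver: ver }
end

qualifiers("PAY.CHECK.PRINT.V2")
qualifiers("PAY.CHECK.PRINT.BATCH.V2")  # subfunc becomes "PRINT.BATCH"
```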
The thing that reads the messages owns the queue
If I have a service endpoint represented by a queue then there is a many-to-one relationship between the things that might call the service and the thing that provides the service. The queue is associated with the service and the thing providing that service. Clients are more or less anonymous. Therefore, if any of the stakeholder applications can be said to "own" the queue, it is the service provider app that consumes messages from it.
The thing that publishes the message owns the topic. Sort of.
The relationship is not as straightforward with topics. Here it is the consumers of messages that are usually anonymous. In that sense, if the topic name reflects any application, it is most likely the publisher. However, even publishers can be anonymous, or at least there may be many of them and not all publishing at the same time. With topics it makes much more sense that the topic tree nodes are structured for the hierarchy of data or functionality they represent. These names tend to match the names of the publishing apps, so sometimes the publisher "owns" the topic as much out of coincidence as anything else.
Put positional qualifiers on the left
Where names have variable numbers of qualifiers, put the positional ones on the left where scripts and automation can parse them. Some shops where both the beginning and ending qualifiers are positional deal with this by using underscores for separators in the variable section of the name like APP.FUNC.SUBFUNC1_SUBFUNC2.VER. Scripts and authorizations then always see a fixed number of nodes in the name but this approach can be brittle if somebody forgets and makes a name with an extra node or two.
Further reading
This sums up most of the general rules but some of the philosophy behind them has been captured in the Mission:Messaging column. In particular:
Embracing cultural change in the WebSphere MQ community
Migration, failover, and scaling in a WebSphere MQ cluster
The article Planning for SSL on the WebSphere MQ network discusses distinguished name standards as they apply to WMQ
In WebSphere MQ 6, I want to script the creation of new queues. However the queues may already exist, and I need the script to be idempotent.
I can create queues using the commands documented here. For example:
DEFINE QREMOTE(%s) RNAME(%s) RQMNAME(%s) XMITQ(%s) DEFPSIST(YES) REPLACE
or
DEFINE QLOCAL(%s) DESCR(%s) DEFPSIST(YES) REPLACE
The REPLACE keyword ensures that creation does not fail if the queue already exists.
I've tested this with an existing, non-empty queue and it seems that no messages were lost. However this is not proof enough. I need to be certain that no messages will ever be lost or corrupted if I run a DEFINE Q... REPLACE command against an existing queue. The existing queue might even be participating in transactions at the time.
Can anyone confirm or deny this behaviour?
A DEFINE command with REPLACE fails if the object is open. Therefore you cannot redefine a queue with pending transactions. The manual states that all messages in the queue are retained during a DEFINE with REPLACE, and this implies no loss of message integrity. You can ALTER a queue with FORCE option to change a queue that is currently open as described here. That too retains messages in the queue without loss of integrity.
The DEFINE command will not affect the messages in a queue. The only effects you might notice are, for example, if you change the queue from FIFO to PRIORITY or vice versa. This only changes the indexing and ordering for new messages in the queue and does not affect existing messages. Similarly, changing attributes of the queue that affect handles only take effect the next time the queue is opened. An example of that is changing BIND(ONOPEN) to BIND(NOTFIXED).
One of the things that I have been recommending for a while for WMQ clusters is to split the queue definition up into build-time and run-time attributes. For example:
DEFINE QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
GET(DISABLED) +
PUT(DISABLED) +
NOTRIGGER +
NOREPLACE
ALTER QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
DESCR('APP service queue for QA') +
DEFPSIST(NO) +
BOTHRESH(5) +
BOQNAME('APP.FUNCTION.BACKOUT.QA') +
CLUSTER('DIV_QA') +
CLUSNL(' ') +
DEFBIND(NOTFIXED)
In this case the GET, PUT and TRIGGER attributes are considered run-time and are only set when the queue is first defined. This allows you to define a new queue in the cluster and have it be disabled until you are ready to turn on the app. In subsequent runs of the script, these attributes are never changed because the statement uses NOREPLACE. So once you enable GET and PUT on the queue these attributes (and the function of the app) are never disturbed by subsequent script runs.
The ALTER then handles all the attributes that are considered build-time. For example, if you change the description, you want it picked up in the next script run. Because we defined the queue in the previous step (or that step failed because the queue exists), we know the ALTER will work.
Whether any attribute such as the cluster membership is build-time or run-time is up to you to decide. This is just an example born from many cases where administrators inadvertently broke something by re-running the MQSC script.
But to answer your question a bit more on point: the things that break do so because someone reset a run-time attribute such as GET(DISABLED) (which can cause an in-flight transaction to be backed out if the app tries to perform a GET on that queue after gets are disabled), and not because the change caused an integrity failure of the queue, a message, or a transaction.
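The build-time/run-time split above lends itself to script generation. As a sketch (`mqsc_for` is a made-up helper, and the attribute lists are abbreviated from the example above):

```ruby
# Emit the two-step MQSC pattern: DEFINE ... NOREPLACE sets run-time
# attributes only on first creation; ALTER re-applies build-time
# attributes on every run of the script.
def mqsc_for(queue, runtime:, buildtime:)
  define = "DEFINE QLOCAL(#{queue}) #{runtime.join(' ')} NOREPLACE"
  alter  = "ALTER QLOCAL(#{queue}) #{buildtime.join(' ')}"
  [define, alter]
end

define, alter = mqsc_for("APP.FUNCTION.SUBFUNCTION.QA",
                         runtime:   ["GET(DISABLED)", "PUT(DISABLED)", "NOTRIGGER"],
                         buildtime: ["DEFPSIST(NO)", "BOTHRESH(5)"])
```

Running the generated DEFINE then ALTER through runmqsc reproduces the behavior described: run-time attributes are never disturbed after first creation, while build-time attributes are refreshed on each deployment.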