I am looking to implement an automated DLQ rule file to relieve queue-full conditions in my dev/UAT environments. The issue is that I want to exclude messages that could be bound for full transmission queues. This is for hopping across multiple queue managers.
I had initially thought the below would work:
INPUTQM(qmgrname) WAIT(YES)
REASON(MQRC_Q_FULL) DESTQM(local qmgr name) ACTION(DISCARD) RETRY(5)
However, on testing, when the transmission queue is full the message does not get the transmission queue header put on it, so the DESTQM name never changes to the next queue manager in the hop. The message falls to the DLQ with the remote queue name and reason 2053 (MQRC_Q_FULL), with DESTQM still set to the local queue manager.
Wondering if anyone has any ideas on a rule file that could work here?
Filtering by DESTQ might work, provided the queue names allow for a pattern (or several patterns, in which case you would need more rules) that matches only the non-transmission queues:
Wildcard characters are supported. You can use the question mark (?) instead of any single character, except a trailing blank; you can use the asterisk (*) instead of zero or more adjacent characters. The asterisk (*) and the question mark (?) are always interpreted as wildcard characters in parameter values.
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.adm.doc/q005690_.htm?lang=en
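For example, a rules table along these lines might work, assuming the application (non-transmission) queues follow a naming convention such as APP.* (the queue manager name, queue prefix, and retry count here are illustrative, not taken from your environment):

INPUTQM(DEVQM1) WAIT(YES)
* Queue-full messages destined for application queues only: discard them
REASON(MQRC_Q_FULL) DESTQ(APP.*) ACTION(DISCARD) RETRY(5)
* Anything else that hit a queue-full condition (e.g. messages bound for
* another queue manager via a full transmission queue) stays on the DLQ
REASON(MQRC_Q_FULL) ACTION(IGNORE)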
ActiveMQ 5.15.13
Context: I have a single queue with multiple consumers. I want to stop some consumers from processing certain messages. This has to be dynamic; I don't want to create separate queues for this. This works without any problems, e.g. Consumer1 ignores Stocks -> Consumer1 can process all Invoices and Consumer2 can process all Stocks.
But if there is a large number of messages already in the Queue (of one type, e.g. stocks) and I send a message of another type (e.g. invoices), Consumer1 won't process the message of type invoices. It will instead be idle until Consumer2 has processed all Stocks messages. It does not happen every time, but quite often.
Is there any option to change the order of the new messages coming into the queue, such that an idle consumer with matching selector picks up the new message?
Things I've already tried:
using a PendingMessageLimitStrategy -> it seems like it does not work for queues
increasing the maxPageSize and maxBrowsePageSize in the hope that once all Messages are in RAM, the Consumers will search for their messages.
Exclusive Consumers aren't an option since I want to be able to use more than one Consumer per message type.
I'm pretty sure that there is some configuration which allows this type of usage. I'm aware that there are better solutions for this issue, but sadly I can't use them easily due to other constraints.
Thanks a lot in advance!
EDIT: I noticed that when I refresh the localhost queue browser, the stuck messages get processed immediately. It seems like this action performs some sort of queue refresh where the messages get filtered based on their selector again. So I just need this action whenever a new message enters the queue...
This is a 'window' problem where the next set of 'stocks' data needs to be processed before the 'invoicing' data can be processed.
The gotcha with window problems like this is that you need to account for the fact that some messages may never come through, or a consumer may never come back online either. Also, eventually you will be asked 'how many invoices or stocks are left to be processed'-- aka observability.
ActiveMQ has you covered-- check out wild-card destinations and consumers.
Produce 'stocks' to:
queue://data.stocks.input
Produce 'invoices' to:
queue://data.invoices.input
You then set up consumers to connect to:
queue://data.*.input
Note the wildcard '*'.
ActiveMQ will match queues based on the wildcard pattern, and then process data accordingly. As a bonus, you can still use a selector.
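A minimal sketch of a consumer on the wildcard destination using the ActiveMQ JMS client (the broker URL and the selector expression are placeholders, not part of your setup):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WildcardConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Matches queue://data.stocks.input, queue://data.invoices.input, etc.
        Destination wildcard = session.createQueue("data.*.input");

        // A selector can still be layered on top of the wildcard subscription.
        MessageConsumer consumer = session.createConsumer(wildcard, "type = 'invoice'");
        consumer.setMessageListener(message -> {
            // handle the message here
        });
    }
}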
I have a JMS Queue which will be flooded with event messages from another system. I need to write a program/component which reads the messages and compares the event data against set of rules. Multiple rules can match a message from JMS. If they match, the system should be able to send notifications.
Example:
The JMS queue will be flooded with strings.
Users of this system are interested in specific type of strings.
User1 will create a rule - "String that contains no special chars"
User2 will create a rule - "String with only capital letters"
and so on...
I should design a system which consumes the strings from JMS, checks each string against all rules, and alerts the respective user that a string matching their rule has arrived.
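To make that concrete, each rule could be compiled once into a predicate and evaluated against every incoming string, roughly like this (the class name and the regexes standing in for User1's and User2's rules are just illustrations):

import java.util.List;
import java.util.function.Predicate;
import java.util.regex.Pattern;

public class RuleCheck {
    static final class Rule {
        final String owner;
        final Predicate<String> test;
        Rule(String owner, Predicate<String> test) { this.owner = owner; this.test = test; }
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            // User1: string contains no special characters
            new Rule("User1", Pattern.compile("^[A-Za-z0-9 ]*$").asMatchPredicate()),
            // User2: string contains only capital letters
            new Rule("User2", Pattern.compile("^[A-Z]+$").asMatchPredicate())
        );

        String message = "HELLO";
        for (Rule rule : rules) {
            if (rule.test.test(message)) {
                System.out.println("notify " + rule.owner);   // alert the rule's owner
            }
        }
    }
}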
Some considerations:
Hundreds of users.
Each user can create hundreds of rules.
In total, thousands of rules need to be run against a single string from JMS.
So, the comparison needs to be extremely fast.
Also, the users are allowed to create more rules while the system is running.
Which framework will help me achieve this?
I am using Camel to integrate with ActiveMQ JMS. I am receiving prices for products on this queue. I am using JMSXGroupID on productId to ensure ordering within a productId. Now, if I fail to process a message I move it to a dead letter queue. This could be because of a connection error on a dependent service, or because of an error with the message itself.
In case of the former I would have to manually remove it from the DLQ and put it back into the JMS queue.
Now the problem is that I don't know whether any other message on that groupId has been received and processed in the meantime, and hence unsidelining from the DLQ could disrupt the order. On the other hand, if I don't unsideline it and no other message has been received, the productId will not get the correct price.
One solution I have in mind is to use a fast key-value store (Redis) to store the last messageId or JMSTimestamp against each productId (message group), updated every time I dequeue a message. Is there any other solution for this?
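A rough sketch of that key-value idea using the Jedis client (the key naming scheme and the client setup are assumptions for illustration, not existing code):

import redis.clients.jedis.Jedis;

public class GroupTracker {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Called every time a message for this productId is dequeued successfully.
    public void recordDequeue(String productId, String messageId, long jmsTimestamp) {
        jedis.set("lastmsg:" + productId, messageId + ":" + jmsTimestamp);
    }

    // Called before unsidelining a message from the DLQ: has a newer message
    // for this group already been processed?
    public boolean newerMessageProcessed(String productId, long dlqMessageTimestamp) {
        String last = jedis.get("lastmsg:" + productId);
        if (last == null) {
            return false;
        }
        long lastTimestamp = Long.parseLong(last.substring(last.lastIndexOf(':') + 1));
        return lastTimestamp > dlqMessageTimestamp;
    }
}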
Relying on message order in JMS is a risky business - at best.
The best thing to do is to make the receiver handle messages out of sequence as a special case (though it may still take advantage of message order during normal operation).
You may also want to distinguish between two kinds of errors: poison messages and temporary connection problems, maybe even using two different error queues for them. In the case of a poison message (invalid payload, etc.) there is nothing you can really do about it except start a bug investigation. In such cases, you can probably send along "something else", such as a dummy message, so as not to interfere with the order.
For the issues with connection problems, you can use another strategy - ActiveMQ Redelivery Policies. If there is network trouble, there is usually no point in trying to process the second message until the first has been handled. A redelivery policy ensures that (given you have a single consumer, that is). There is another question on SO where the poster actually has a solution to your problem and wants to avoid it. Read it. :)
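A minimal sketch of configuring such a policy on the client's connection factory (the broker URL and the delay/retry values are placeholders to tune for your environment):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000);   // wait 1s before the first redelivery
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);
        policy.setMaximumRedeliveries(6);         // after this, the message goes to the DLQ
    }
}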
What are the recommended guidelines for WebSphere MQ naming conventions for queue managers, queues (local, remote, transmission, dead letter queues...), channels, etc.? I found one at IBM's developerWorks but am looking to see if there is anything else comprehensive out there. Thanks.
This sounds like a good topic for a new Mission:Messaging column but I'll write up the condensed version here. I'll preface my answer by noting that many of my recommendations are contrary to those you might find elsewhere. In some cases this is because the way MQ is commonly used has changed over the years. In other cases it is because the conventional wisdom never worked well to begin with. (Cluster channels named TO.QMGR for example.) In all cases, I prefer conventions which apply to the broadest number of situations. That means it is usually possible to find exceptions to these rules for specific cases but they are broadly applicable nonetheless.
Some general rules
The following apply to all object types.
Use the dot character . as a separator.
Authorization rules parse names using dot characters as separators. For example, the queue name MY.EXAMPLE.QUEUE.NAME would match rules like MY.*.*.* or MY.** but not MY.* because the dots signify name node separator characters. Do yourself a favor and use dots rather than underscores as your naming node separators consistently.
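For example, with dot separators, a single generic profile can grant an application access to its whole branch of the name tree. The queue manager, profile, and principal names below are illustrative only:

setmqaut -m QM1 -t queue -n "MY.**" -p appuser +put +get +inq
setmqaut -m QM1 -t queue -n "MY.EXAMPLE.*.*" -p appbrowse +browse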
Use machine-parsable names.
When you have 5 queue managers and a few hundred objects, you can easily get by doing all your management manually with WMQ Explorer or runmqsc. However, there comes a point where consistency, reliability, repeatability and efficiency demand that you script up some of your routine operations or employ instrumentation to respond to network events. More than anything else this means eliminating ambiguity in names.
For example, if you create a naming convention that channel names must look like SRCQMGR.DESTQMGR then it is possible for a script to read a RCVR or SDR channel name and derive the names of the two queue managers it connects. However, what does the script do with a channel name like GA.PAYROLL.OPS? Is it the GA.PAYROLL queue manager connecting to the OPS queue manager? Or is it the GA queue manager connecting to the PAYROLL.OPS queue manager? A human might be able to tell instantly based on context but scripts are notorious for doing what you tell them rather than what you intended. Similar situations arise when queue names have position-dependent qualifiers at both the beginning and the end of the name and a variable number of nodes.
Stick with UPPERCASE names.
This is for compatibility across all platforms, and in particular z/OS. Although it is true that more z/OS shops are using mixed case, it is also true that there are still a lot of systems out there that only accept UPPERCASE names. While it is easy to say "this doesn't apply to me" I have seen many cases where somebody had problems interfacing to a new business partner because of incompatible names. After all, the ability to interface to just about any platform out there is one of the main reasons for using WMQ in the first place.
Don't include attributes of the object in the name
In an SOA world, queues and topics are different types of destination and often are interchangeable. Something putting messages to what it thinks is a queue doesn't necessarily know (or care!) if they are actually going to a queue or a topic. A queue that has an application listening on it for messages may be fed by an administrative subscription that is actually roping in publications from one or more topics.
What we really care about is the nature of the messages - what function do they perform - not whether we are connecting to a local queue or an alias queue. So adding qualifiers like .QA, .QL, .TPC (for topic) and so forth doesn't make sense. Similarly, adding .RCVR onto a channel name sucks up 5 useful characters that could have been better used describing the QMgr name. Worse, these practices bake the topology into the object names, making the system both less flexible and more brittle.
Channel names
Point-to-Point Channel names
Use names like SRCQMGR.DESTQMGR for RCVR, RQSTR, SDR, and SVR channels. This is biased toward languages that read left-to-right because the intention is to describe the data flow from one QMgr to another.
Cluster channels
Use names like CLUSNAME.QMNAME. The old wisdom said to use names like TO.QMNAME but if you ever implement overlapping clusters this causes the same channel to be used for multiple clusters. That's bad because you can then never perform maintenance on one cluster without impacting the other. Using CLUSNAME.QMNAME ensures that every QMgr has a dedicated CLUSRCVR channel for each cluster in which it participates.
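For example (the cluster name, queue manager name, and connection name here are illustrative):

DEFINE CHANNEL(DIVCLUS.QM1) CHLTYPE(CLUSRCVR) +
       TRPTYPE(TCP) CONNAME('qm1host(1414)') +
       CLUSTER('DIVCLUS') REPLACE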
Client Channels
The exception to the "don't include attributes of the object in the name" is arguably the SVRCONN channel. This is because channels are very much tied to the physical rather than the logical layer of the network. So putting the QMgr name in a SVRCONN channel name is generally OK. I don't object too strongly if people want to add .SVRCONN at the end, either.
The thing to remember about client channels is that if you use a Client Channel Definition Table (CCDT) then the unique index into that table is the channel name. That means you cannot have the same channel name on multiple QMgrs and still use a CCDT. Since the CCDT is one way to configure the SSL/TLS channel details, this is often not appreciated to its fullest until the "let's finally secure WMQ" project comes along. By using unique channel names for SVRCONN channels from the start, you can future-proof the network. Usually these names look like APP.QMNAME or, to make it obvious you aren't dealing with a cluster or point-to-point channel, APP.QMNAME.SVRCONN or similar.
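For example (the application and queue manager names here are illustrative):

DEFINE CHANNEL(PAYROLL.QM1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE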
Queue manager names
No dots in QMgr names
One implication of the previous rules is that cluster and queue manager names must contain only one node and therefore should never contain a . character. This is because channel names are typically derived from the cluster and/or QMgr names. So in the example above, a RCVR channel name like GA_PAYROLL.OPS would tell both humans and scripts that the channel in question connects a QMgr named GA_PAYROLL to a QMgr named OPS.
Names of 9 chars or less
Channel names can be only 20 characters. Subtracting one for the dot separator, dividing by two, and rounding down gets you to 9 characters max for queue manager names. If there's a possibility that you may set up different channels for classes of service (for large vs. small messages, for example), then drop back to 8 characters or fewer for QMgr names. This leads to channel names like QM1.QM2.A, QM1.QM2.B, etc.
QMgr names reflect the physical layer
In a service-oriented world, we really care about destination names like queues and topics. We care much less about queue manager names because these are just life support for queues and topics. Client apps don't care so much about which QMgr they connect to, so long as they can send requests and receive replies. WMQ very conveniently fills in the reply-to QMgr name on outbound requests so it is rare that an application needs to know about it.
On the other hand, the administrators need to know about the QMgr name. In the early days it was common to name QMgrs for the host server. Later it became the fashion to name them for the applications they hosted. Now in the SOA world, messaging is infrastructure and usually not associated with any single application so the pendulum has swung back. Give the QMgr a unique name that is meaningful to an administrator.
Never reuse QMgr names!
It is unfortunately very common to "move" a QMgr from one place to another or to have a primary and disaster recovery QMgr with the same name. This practice usually means some part of the application is dependent on the QMgr name and therefore it is "easier" to reuse the name. IBM introduced the QMID as a way to address some of the problems introduced by reusing QMgr names. The typical use case is that a node gets rebuilt and the QMgr once there is also rebuilt from scratch. The cluster knows it is a new QMgr because the QMID has changed, but the name used for routing and other operations remains the same.
Although this helped in that limited use case, it doesn't address the issues when both QMgrs with the same name are online at the same time. Nor does it address the problem that a reputable Certificate Authority won't issue multiple certs with the same Distinguished Name which forces reuse of the same certificate for multiple QMgrs.
Remember that QMgrs are just life support for queues and topics and ideally will be anonymous to the applications using them. Pick a naming convention that allows you to spin up new QMgrs with unique names by the hundreds or thousands, if necessary, so that you don't have to reuse QMgr names.
Other objects
Use intention-revealing names
Or to put it another way, name the object for what it does and not what it is. For example, if you were in the habit (as many people are) of including qualifiers like .QL for local queues and .QA for aliases, then any change in topology will impact the applications using those queues. Instead, name the queues for the functions they represent.
Go left-to-right, most generic to most specific
Object names, especially queues, should be constructed hierarchically beginning with the most generic qualifier and proceeding to the most specific qualifier. For example, many shops use APP.FUNC.SUBFUNC.VER where APP is the ID of the owning application, then one or more nodes with the function and subfunction. Many shops add a version qualifier on the end so that new versions of a service can migrate their clients on separate schedules rather than changing the service on the existing queue and making all clients change at the same time.
The thing that reads the messages owns the queue
If I have a service endpoint represented by a queue then there is a many-to-one relationship between the things that might call the service and the thing that provides the service. The queue is associated with the service and the thing providing that service. Clients are more or less anonymous. Therefore, if any of the stakeholder applications can be said to "own" the queue, it is the service provider app that consumes messages from it.
The thing that publishes the message owns the topic. Sort of.
The relationship is not as straightforward with topics. Here it is the consumers of messages that are usually anonymous. In that sense, if the topic name reflects any application, it is most likely the publisher. However, even publishers can be anonymous, or at least there may be many of them, not all publishing at the same time. With topics it makes much more sense that the topic tree nodes are structured for the hierarchy of data or functionality they represent. These names tend to match the names of the publishing apps, so sometimes the publisher "owns" the topic as much out of coincidence as anything else.
Put positional qualifiers on the left
Where names have variable numbers of qualifiers, put the positional ones on the left where scripts and automation can parse them. Some shops where both the beginning and ending qualifiers are positional deal with this by using underscores for separators in the variable section of the name like APP.FUNC.SUBFUNC1_SUBFUNC2.VER. Scripts and authorizations then always see a fixed number of nodes in the name but this approach can be brittle if somebody forgets and makes a name with an extra node or two.
Further reading
This sums up most of the general rules but some of the philosophy behind them has been captured in the Mission:Messaging column. In particular:
Embracing cultural change in the WebSphere MQ community
Migration, failover, and scaling in a WebSphere MQ cluster
The article Planning for SSL on the WebSphere MQ network discusses distinguished name standards as they apply to WMQ
In WebSphere MQ 6, I want to script the creation of new queues. However the queues may already exist, and I need the script to be idempotent.
I can create queues using the commands documented here. For example:
DEFINE QREMOTE(%s) RNAME(%s) RQMNAME(%s) XMITQ(%s) DEFPSIST(YES) REPLACE
or
DEFINE QLOCAL(%s) DESCR(%s) DEFPSIST(YES) REPLACE
The REPLACE keyword ensures that creation does not fail if the queue already exists.
I've tested this with an existing, non-empty queue and it seems that no messages were lost. However this is not proof enough. I need to be certain that no messages will ever be lost or corrupted if I run a DEFINE Q... REPLACE command against an existing queue. The existing queue might even be participating in transactions at the time.
Can anyone confirm or deny this behaviour?
A DEFINE command with REPLACE fails if the object is open. Therefore you cannot redefine a queue with pending transactions. The manual states that all messages in the queue are retained during a DEFINE with REPLACE, and this implies no loss of message integrity. You can ALTER a queue with the FORCE option to change a queue that is currently open, as described here. That too retains messages in the queue without loss of integrity.
The DEFINE command will not affect the messages in a queue. The only effects you might notice are, for example, if you change the queue from FIFO to PRIORITY or vice versa. This only changes the indexing and ordering for new messages in the queue and does not affect existing messages. Similarly, changing attributes of the queue that affect handles only take effect the next time the queue is opened. An example of that is changing BIND(ONOPEN) to BIND(NOTFIXED).
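As an MQSC sketch of those two cases (the queue name is illustrative):

* Ordering change applies only to new messages; binding change applies only to new opens
ALTER QLOCAL(APP.FUNCTION.SUBFUNCTION.QA) MSGDLVSQ(PRIORITY) DEFBIND(NOTFIXED)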
One of the things that I have been recommending for a while for WMQ clusters is to split the queue definition up into build-time and run-time attributes. For example:
DEFINE QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
GET(DISABLED) +
PUT(DISABLED) +
NOTRIGGER +
NOREPLACE
ALTER QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
DESCR('APP service queue for QA') +
DEFPSIST(NO) +
BOTHRESH(5) +
BOQNAME('APP.FUNCTION.BACKOUT.QA') +
CLUSTER('DIV_QA') +
CLUSNL(' ') +
DEFBIND(NOTFIXED)
In this case the GET, PUT and TRIGGER attributes are considered run-time and are only set when the queue is first defined. This allows you to define a new queue in the cluster and have it be disabled until you are ready to turn on the app. In subsequent runs of the script, these attributes are never changed because the statement uses NOREPLACE. So once you enable GET and PUT on the queue these attributes (and the function of the app) are never disturbed by subsequent script runs.
The ALTER then handles all the attributes that are considered build-time. For example, if you change the description, you want it picked up in the next script run. Because we defined the queue in the previous step (or that step failed because the queue exists), we know the ALTER will work.
Whether any attribute such as the cluster membership is build-time or run-time is up to you to decide. This is just an example born from many cases where administrators inadvertently broke something by re-running the MQSC script.
But to answer your question a bit more on point, the things that break are because someone reset a run-time attribute such as GET(DISABLED) (which can cause an in-flight transaction to be backed out if the app tries to perform a GET on that queue after gets are disabled) and not because the change caused an integrity failure of the queue, a message or a transaction.