Triggering for events - ibm-mq

I have a queue definition like below:
DEFINE QLOCAL(TRIG.QLOCAL) +
DESCR('Example Queue for Triggering') +
DEFPRTY(0) +
DEFSOPT(SHARED) +
GET(ENABLED) +
MAXDEPTH(5000) +
MAXMSGL(4194304) +
MSGDLVSQ(PRIORITY) +
PUT(ENABLED) +
QDEPTHHI(80) +
QDPHIEV(ENABLED) +
RETINTVL(999999999) +
TRIGTYPE(EVERY) +
PROCESS(TRIG.PROCESS) +
INITQ(TRIG.INITQ) +
USAGE(NORMAL) +
REPLACE
I have defined the process like below:
DEFINE PROCESS(TRIG.PROCESS) APPLTYPE(UNIX) +
APPLICID(/appn/sy31/QdepthHiAlert.sh) +
ENVRDATA(' ') +
USERDATA(' ') +
DESCR('PROCESS FOR TESTING QDEPTH HIGH EVENT') +
REPLACE
I have a trigger monitor running as service like below:
SERVICE(TRIGGER_MONITOR) STATUS(RUNNING)
PID(49610840) SERVTYPE(SERVER)
CONTROL(QMGR) STARTCMD(/usr/bin/runmqtrm)
STARTARG(-m PACOHB20 -q SYSTEM.DEFAULT.INITIATION.QUEUE)
Here are my questions:
I thought all the trigger messages would be processed by the trigger monitor script. If we don't configure it on the INITQ, the process associated with the queue will not run. Is that correct?
If yes: our trigger monitor is not running on the queue's INITQ (TRIG.INITQ). Must we run the trigger monitor on that INITQ too?
When we configured the transmission queues for triggering, we defined the trigger data and process definitions. Though we didn't configure a trigger monitor on the initiation queue, the channel still starts, since we have runmqchi on the system.channel.initiation.queue. So do runmqtrm and runmqchi function similarly?
Here we have triggering for EVERY message and a queue depth high event. In both cases the trigger message will be placed on the same INITQ. So how do we know what kind of alert we are receiving?

OK, let's take these one at a time.
I thought all the trigger messages would be processed by the trigger
monitor script. If we don't configure it on the INITQ, the process
associated with the queue will not run. Is that correct?
If I understand correctly, you are asking whether the process will still be fired if there is nothing listening on the initiation queue. It will not. The application queue must have TRIGGER set and specify the INITQ value, and the specified initiation queue must have an open input handle in order for MQ to format and place the trigger message.
If yes: our trigger monitor is not running on the INITQ (TRIG.INITQ).
Must we run the trigger monitor on that INITQ too?
Yes. The queue's INITQ is where the QMgr will place any trigger messages. The QMgr will not place trigger messages unless there is an open input handle on that initiation queue, and that handle had better be from a trigger monitor or it won't work.
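As a rough sketch of the decision described above (deliberately simplified; the full rules are listed under "Conditions for a trigger event" in the IBM MQ documentation, and the attribute names below are the real MQSC ones):

```python
# Simplified simulation of when a queue manager generates a trigger
# message. This is an illustrative sketch, not IBM MQ code: the real
# rules include more checks than are modeled here.

def should_generate_trigger(trigger_enabled, trigtype, queue_depth,
                            trigdpth, initq_has_open_input_handle):
    """Return True if a trigger message would be written to the INITQ."""
    if not trigger_enabled:
        return False                      # TRIGGER not set on the queue
    if not initq_has_open_input_handle:
        return False                      # nobody (e.g. runmqtrm) has the INITQ open
    if trigtype == "EVERY":
        return True                       # one trigger message per arriving message
    if trigtype == "FIRST":
        return queue_depth == 1           # depth went from 0 to 1
    if trigtype == "DEPTH":
        return queue_depth >= trigdpth    # depth reached TRIGDPTH
    return False
```

Note how the open input handle on the initiation queue gates everything else: without it, no trigger message is generated regardless of the TRIGTYPE.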
When we configured the transmission queues for triggering, we defined
the trigger data and process definitions. Though we didn't configure a
trigger monitor on the initiation queue, the channel still got started,
since we have runmqchi on the INITQ. So do runmqtrm and runmqchi
function similarly?
The channel initiator is a LOT more forgiving of sloppy configuration than the trigger monitor is. It is easy to work out from a channel definition which transmission queue it uses. So MQ figures that if the administrator defined a queue of type XMITQ and set it to TRIGGER, the intent must be to start a channel. It then works backward from the channel definitions to discover which channel is associated with that queue.
But there are no such safe assumptions with runmqtrm. You must connect the dots from the queue's INITQ and PROCESS attributes to the trigger monitor listening on the specified INITQ and the associated process starting, correctly reading the trigger message, and then processing the queue as intended.
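For contrast, here is the conventional wiring for channel triggering; the queue and channel names are placeholders, not anything from your configuration:

```
* Transmission queue triggered for the channel initiator.
* TRIGDATA names the channel to start; SYSTEM.CHANNEL.INITQ is the
* initiation queue runmqchi listens on by default.
DEFINE QLOCAL('REMOTE.QM') +
       USAGE(XMITQ) +
       TRIGGER +
       TRIGTYPE(FIRST) +
       TRIGDATA('TO.REMOTE.QM') +
       INITQ('SYSTEM.CHANNEL.INITQ') +
       REPLACE
```

With runmqtrm, by contrast, every one of those dots (INITQ, PROCESS, the trigger monitor's own startup arguments) must be connected by hand.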
Here we have triggering for EVERY message and a queue depth high
event. In both cases the trigger message will be placed on the same
INITQ. So how do we know what kind of alert we are receiving?
These are two different things. Only one type of triggering can be specified on a queue and it is one of FIRST, DEPTH, or EVERY. You can also specify that the QMgr will emit an event message to an event queue (not an initiation queue) when the queue depth exceeds some threshold.
The two things are related but completely different types of instrumentation. The triggering instrumentation is designed to start processes under certain conditions. The queue depth events are designed to feed real-time operational information to monitoring agents listening on the event queues.
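To make the separation concrete: queue depth events require performance events to be enabled at the queue manager, and the resulting event messages land on SYSTEM.ADMIN.PERFM.EVENT, never on the queue's INITQ. A minimal sketch, using the queue name from your definition:

```
* Queue depth events are separate from triggering. The event message
* goes to SYSTEM.ADMIN.PERFM.EVENT, not to TRIG.INITQ.
ALTER QMGR PERFMEV(ENABLED)
ALTER QLOCAL('TRIG.QLOCAL') +
      QDEPTHHI(80) +
      QDPHIEV(ENABLED)
```

So you always know what kind of alert you have by which queue it arrived on: trigger messages on the INITQ, depth events on the performance event queue.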
For more about triggering, including a mini-tutorial built around a useful application and sample scripts to implement it, please see Mission:Messaging: Easing administration and debugging with circular queues.

Related

omnetpp: Avoid "sending while transmitting" error using sendDelayed()

I am implementing a PON in OMNet++ and I am trying to avoid the runtime error that occurs when transmitting at the time another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send() but I don't prefer that way).
Even though I have used sendDelayed() I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free if I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime or at simTime()?
I read the Simulation Manual but it is not clear about that case I'm asking.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At that point it is checked whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime() (i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later). But please be aware that this check only catches the most common errors. If, for example, you schedule TWO messages for the same time using sendDelayed(), the kernel will check only that each starts after the currently transmitted message is finished, but will NOT detect that you have scheduled two or more messages for the same time after that point.
Generally, when you transmit over a channel that has a non-zero datarate (i.e. it takes time to transmit the message), you always have to consider what happens when messages arrive faster than the channel can send them. In that case you should either throw away the message or queue it. If you queue it, you obviously have to put it into a data structure (a queue) and then schedule a self-timer to be executed at the time when the channel gets free (and the message is delivered at the other side). At that point, you should take the next packet from the queue, put it on the channel, and schedule the next self-timer for the time when that message is delivered.
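The scheduling arithmetic behind that queue-plus-self-timer pattern can be sketched outside OMNeT++ (in OMNeT++ itself you would use a cQueue plus scheduleAt(); this is only the timing logic, with illustrative numbers):

```python
# Sketch of the "queue + self-timer" scheduling logic described above.
# Given message arrival times and a fixed transmission duration, each
# message starts either when it arrives or when the channel frees up,
# whichever is later. (In OMNeT++ the "channel free" time is what
# channel->getTransmissionFinishTime() reports.)

def schedule_transmissions(arrival_times, tx_duration):
    """Return the start time of each transmission, FIFO order."""
    starts = []
    channel_free_at = 0.0
    for t in sorted(arrival_times):
        start = max(t, channel_free_at)   # wait if a transmission is ongoing
        starts.append(start)
        channel_free_at = start + tx_duration
    return starts
```

A naive sendDelayed() call computes each start time once, from the channel state at the moment of the call; the explicit queue recomputes it as each earlier transmission actually finishes, which is why it stays correct under bursts.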
For this reason, using just sendDelayed() is NOT the correct solution, because you are trying to implement a queue implicitly by postponing the message. The problem is that once you schedule a message with sendDelayed(), what delay will you use if another packet arrives, and then another, in a short timeframe? As you can see, you are implicitly creating a queue here by postponing the events. You are just using the simulation's main event queue to store the packets, but it is much more convoluted and error prone.
Long story short: create a queue and schedule a self-event to manage the queue's content properly, or drop the packets if that suits your needs.

Azure Queues - Functions - Message Visibility - Workers?

I have some questions regarding the capabilities regarding Azure Queues, Functions, and Workers. I'm not really sure how this works.
Scenario:
q-notifications is a queue in an Azure storage account.
f-process-notification is a function in Azure that is bound to q-notifications. Its job is to get the first message on the queue and process it.
In theory when a message is added to q-notifications, the function f-process-notification should be called.
Questions:
Does the triggered function replace the need to have workers? In other words, is f-process-notification called each time a message is placed in the queue?
Suppose I place a message on the queue that has a visibility timeout of 5 minutes. Basically I am queueing the message but it shouldn't be acted on until 5 minutes pass. Does the queue trigger f-process-notification immediately when the message is placed on the queue, or will it only trigger f-process-notification when the message becomes visible, i.e. 5 minutes after it is placed on the queue?
In Azure Functions, each Function App instance running your queue triggered function will have its own listener for the target queue. It monitors the queue for new work using an exponential backoff strategy. When new items are added to the queue, the listener will pull multiple items off of the queue (batching behavior is configurable) and dispatch them in parallel to your function. If your function is successful, the message is deleted; otherwise it will remain on the queue to be reprocessed. To answer your question: yes, we respect any visibility timeout you specify. If a message is added with a 5 minute timeout, it will only be processed after that.
Regarding scale out - when N instances of your Function App are running they will all cooperate in processing the queue. Each queue listener will independently pull batches of messages off the queue to process. In effect, the work will be load balanced across the N instances. Exactly what you want :) Azure Functions is implementing all the complexities of the multiple consumer/worker pattern for you behind the scenes.
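The batching and polling behavior mentioned above is configured in host.json. A sketch (values are illustrative; the setting names are from the Functions v2 storage queue binding):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:02",
      "batchSize": 16,
      "newBatchThreshold": 8,
      "maxDequeueCount": 5
    }
  }
}
```

batchSize controls how many messages each listener pulls at once, and maxDequeueCount is how many failed attempts are allowed before a message is moved to the poison queue.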
I typically use listener logic as opposed to triggers. The consumer(s) are constantly monitoring the queue for messages. If you have multiple consumers, for example 5 instances of the consuming code in different Azure worker roles processing the same bus/queue, the first consumer to get the message wins (they are "competing"). This provides a scaling scenario common in an SOA architecture.
This article describes some of the ways to defer processing.
http://markheath.net/post/defer-processing-azure-service-bus-message
good luck!

Multiple consumers working as single consumer with Masstransit

My system has a constraint for a specific consumer: messages must be handled in order, one after the other. To implement that, we set the concurrency to 1.
Now we want to scale out and add more instances of this consumer.
To keep the ordering, I want to use a distributed lock manager like 'RedLock'. It can tell each consumer whether it is OK to fetch the next message.
I work with RabbitMQ, and my question is whether there is some kind of observer event that fires before messages are fetched from the queue. In other words, I need a way to enable/disable the polling of messages from the queue.

read messages from JMS MQ or In-Memory Message store by count

I want to read messages from JMS MQ or In-memory message store based on count.
I want to start reading the messages when the message count reaches 10; until then, I want the message processor to be idle.
I want this to be done using WSO2 ESB.
Can someone please help me?
Thanks.
I'm not familiar with wso2, but from an MQ perspective, the way to do this would be to trigger the application to run once there are 10 messages on the queue. There are trigger settings for this, specifically TRIGTYPE(DEPTH).
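As a sketch of what that looks like (the queue, process, and initiation queue names are placeholders):

```
* Fire the trigger once the queue reaches 10 messages. Note that the
* queue manager sets the queue to NOTRIGGER after a DEPTH trigger
* fires; the triggered application must re-enable TRIGGER once the
* queue has been drained.
DEFINE QLOCAL('WSO2.STAGING') +
       TRIGGER +
       TRIGTYPE(DEPTH) +
       TRIGDPTH(10) +
       PROCESS('WSO2.FEED.PROCESS') +
       INITQ('WSO2.INITQ') +
       REPLACE
```

That re-enabling step is exactly the "glue code resets the trigger" piece described in the answer below.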
To expand on Morag's answer, I doubt that WSO2 has built-in triggers that would monitor the queue for depth before reading messages. I suspect it just listens on a queue and processes messages as they arrive. I also doubt that you can use MQ's triggering mechanism to directly execute the flow based on depth. So although triggering is a great answer, you need a bit of glue code to make it work.
Conveniently, there's a tutorial that provides almost all the information necessary to do this. Please see Mission:Messaging: Easing administration and debugging with circular queues for details. That article has the scripts necessary to make the Q program work with MQ triggering. You just need to make a couple of changes:
Instead of sending a command to Q to delete messages, send a command to move them.
Ditch the math that calculates how many messages to delete and either move them in batches of 10, or else move all messages until the queue drains. In the latter case, make sure to tell Q to wait for any stragglers.
Here's what it looks like when completed: the incoming messages land on some queue other than the WSO2 input queue. That queue is triggered based on depth so that the Q program (SupportPac MA01) copies the messages to the real WSO2 input queue. After the messages are copied, the glue code resets the trigger. This continues until there are fewer than 10 messages on the queue, at which time the cycle idles.
I got it working by pushing the messages to a database and reading them back once the required count was reached; see my answer for details.

Will MQSC define queue command ever delete or corrupt messages?

In WebSphere MQ 6, I want to script the creation of new queues. However the queues may already exist, and I need the script to be idempotent.
I can create queues using the commands documented here. For example:
DEFINE QREMOTE(%s) RNAME(%s) RQMNAME(%s) XMITQ(%s) DEFPSIST(YES) REPLACE
or
DEFINE QLOCAL(%s) DESCR(%s) DEFPSIST(YES) REPLACE
The REPLACE keyword ensures that creation does not fail if the queue already exists.
I've tested this with an existing, non-empty queue and it seems that no messages were lost. However this is not proof enough. I need to be certain that no messages will ever be lost or corrupted if I run a DEFINE Q... REPLACE command against an existing queue. The existing queue might even be participating in transactions at the time.
Can anyone confirm or deny this behaviour?
A DEFINE command with REPLACE fails if the object is open. Therefore you cannot redefine a queue with pending transactions. The manual states that all messages in the queue are retained during a DEFINE with REPLACE, and this implies no loss of message integrity. You can ALTER a queue with FORCE option to change a queue that is currently open as described here. That too retains messages in the queue without loss of integrity.
The DEFINE command will not affect the messages in a queue. The only effects you might notice are, for example, if you change the queue from FIFO to PRIORITY or vice versa. This changes only the indexing and ordering of new messages in the queue and does not affect existing messages. Similarly, changes to queue attributes that affect handles take effect only the next time the queue is opened. An example of that is changing DEFBIND(OPEN) to DEFBIND(NOTFIXED).
One of the things that I have been recommending for a while for WMQ clusters is to split the queue definition up into build-time and run-time attributes. For example:
DEFINE QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
GET(DISABLED) +
PUT(DISABLED) +
NOTRIGGER +
NOREPLACE
ALTER QLOCAL (APP.FUNCTION.SUBFUNCTION.QA) +
DESCR('APP service queue for QA') +
DEFPSIST(NO) +
BOTHRESH(5) +
BOQNAME('APP.FUNCTION.BACKOUT.QA') +
CLUSTER('DIV_QA') +
CLUSNL(' ') +
DEFBIND(NOTFIXED)
In this case the GET, PUT and TRIGGER attributes are considered run-time and are only set when the queue is first defined. This allows you to define a new queue in the cluster and have it be disabled until you are ready to turn on the app. In subsequent runs of the script, these attributes are never changed because the statement uses NOREPLACE. So once you enable GET and PUT on the queue these attributes (and the function of the app) are never disturbed by subsequent script runs.
The ALTER then handles all the attributes that are considered build-time. For example, if you change the description, you want it picked up in the next script run. Because we defined the queue in the previous step (or that step failed because the queue exists), we know the ALTER will work.
Whether any attribute such as the cluster membership is build-time or run-time is up to you to decide. This is just an example born from many cases where administrators inadvertently broke something by re-running the MQSC script.
But to answer your question a bit more on point, the things that break are because someone reset a run-time attribute such as GET(DISABLED) (which can cause an in-flight transaction to be backed out if the app tries to perform a GET on that queue after gets are disabled) and not because the change caused an integrity failure of the queue, a message or a transaction.
