I have to check the IBM MQ queue manager status before opening a queue.
I need to create a requestor app that first checks whether the QMgr is active, and then puts or gets messages from MQ.
Is it possible to check the status? Please share some code snippets.
Thanks
You should NEVER have to check the QMgr before opening a queue. As I responded to a similar question today, the proposed design is a very, VERY bad one. The effect is to turn async messaging back into synchronous messaging. This couples message producers to consumers, introduces location and resolution dependencies, breaks clustering, defeats WMQ's load distribution and balancing, embeds network topology into the application, and makes the whole system brittle. Please do not blame WMQ for not working correctly after intentionally defeating all of its best features except the actual enqueue/dequeue operations.
If your requestor app is checking that the QMgr is active, you are much better off using a multi-instance connection name and a layer of two or more functionally equivalent QMgrs that can access the cluster. So long as one of the QMgrs is up, the app will cycle between them until it finds one at which to connect.
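As a minimal sketch of that approach, using the IBM MQ classes for JMS with hypothetical host, channel, and QMgr names (assumes client-mode transport and a reasonably current MQ client): the connection name list makes the client try each listed endpoint in turn, so the app never checks QMgr status, it just connects.

```java
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MultiInstanceConnect {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Hypothetical hosts/ports; the client tries each entry until one accepts.
        cf.setConnectionNameList("qmhostA(1414),qmhostB(1414)");
        cf.setChannel("APP.SVRCONN");   // hypothetical SVRCONN channel
        cf.setQueueManager("QM1");      // or a QMgr group name, depending on the setup
        // Reconnect automatically if the current QMgr goes away.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);

        Connection conn = cf.createConnection();
        conn.start();                   // failure here just means retry, not "check status first"
        System.out.println("Connected");
        conn.close();
    }
}
```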
If your responder app is checking that the QMgr is active, you are much better off just attempting to connect. Responder apps should never fail over to a different QMgr since doing so breaks transactionality and may leave queues unserviced. Instead just ensure that each queue has at least two input handles from local responder apps that do not fail over across QMgrs. (It is OK if the QMgr itself fails over using hardware clustering or multi-instance QMgr though).
If the intent is to check that there's an open input handle on the queue before putting messages there, a better design is to have the requesting app not care which queue instance the messages are routed to, and instead use the instrumentation built into WMQ either to restart responder apps that lose their input handle, or to disable the queue when nothing is listening.
I have a ZMQ_PULL/ZMQ_PUSH socket connection.
I have multiple ZMQ_PUSH connections pushing to a single ZMQ_PULL connection.
ZMQ_PUSH connection 1----->
ZMQ_PUSH connection 2-----> ZMQ_PULL
ZMQ_PUSH connection N----->
I do not need every message, I just need the latest message that was sent. I am doing some inference on the back end and am streaming the results to the ZMQ_PULL socket.
I have set the ZMQ_PULL socket to Conflate=true
"If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options."
But after testing I realize I actually need the last message from each connection, not just the last message overall. So, with 3 connections, I want it to grab from each connection in round-robin fashion, so that I constantly have the latest message from each connection.
Is there an option that is like Conflate, but instead of for all messages, it is for each connection?
Docs: http://api.zeromq.org/4-0:zmq-setsockopt
Is there an option that is like Conflate, but instead of for all messages, it is for each connection?
No.
The documentation you cite explains that 0MQ does not currently offer direct support for such a single-socket use case. You could certainly code it up and submit an upstream PR so that future revs of 0MQ offer such functionality.
Given that you'll need app-level support to make this work with 0MQ 4.3, the simplest approach would be to maintain N ZMQ_PULL sockets with ZMQ_CONFLATE set, as you're already aware.
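A sketch of that N-socket approach, here using the JeroMQ Java binding with hypothetical port numbers (this assumes each pusher connects to its own PULL endpoint): because each socket is conflated, a non-blocking read always yields the newest message from that source, if any.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class LatestPerSource {
    public static void main(String[] args) {
        final int N_SOURCES = 3;  // hypothetical number of pushers
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket[] pulls = new ZMQ.Socket[N_SOURCES];
            for (int i = 0; i < N_SOURCES; i++) {
                pulls[i] = ctx.createSocket(SocketType.PULL);
                pulls[i].setConflate(true);              // keep only the newest message on this socket
                pulls[i].bind("tcp://*:" + (5550 + i));  // one endpoint per pusher (ports are hypothetical)
            }
            while (!Thread.currentThread().isInterrupted()) {
                for (int i = 0; i < N_SOURCES; i++) {
                    byte[] msg = pulls[i].recv(ZMQ.DONTWAIT); // non-blocking; null if nothing new
                    if (msg != null) {
                        System.out.println("latest from source " + i + ": " + new String(msg, ZMQ.CHARSET));
                    }
                }
            }
        }
    }
}
```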
An alternate approach would be to assign a dedicated thread or process to keep draining the existing muxed socket, updating a shared-memory data structure that interested clients can consult. The idea is to burn a core on keeping the queue mostly empty, doing no processing, just focusing on communications. Other cores can then examine the "most recent message" and each embark on some expensive processing, while another core keeps the queue drained. This essentially offers the 0MQ service proposed above, but at a different place in the stack, up a level, within your application.
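A sketch of that drainer, again in Java with JeroMQ, assuming each pusher prepends its own id to the payload (the "sourceId|payload" framing is an assumption of this example): one thread blocks on recv to keep the queue empty and records the newest payload per source in a shared map that worker threads consult.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class QueueDrainer {
    // Newest payload per source id; worker threads read this instead of the socket.
    static final Map<String, String> latest = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket pull = ctx.createSocket(SocketType.PULL);
            pull.bind("tcp://*:5555");         // hypothetical endpoint all pushers connect to
            while (!Thread.currentThread().isInterrupted()) {
                String msg = pull.recvStr();   // blocking read keeps the queue drained
                int sep = msg.indexOf('|');    // assumed "sourceId|payload" framing
                if (sep > 0) {
                    latest.put(msg.substring(0, sep), msg.substring(sep + 1));
                }
            }
        }
    }
}
```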
To do this in a distributed way, the "queue draining service" would need to know about idle workers. That is, a worker could publish a brief "I just completed an expensive task" message, which would trigger the drainer to post a fresh work item, never using shared memory at all. This lets the drainer worry about eliding duplicate messages that arrived when no one was available to start work on them immediately and that have since been superseded by a more recent message.
I am wondering whether MQ can be used as a state cache for monitoring, and whether this is a good idea or not.
In theory you can have many sources (monitoring agents) that detect problem states and distribute them to subscribers via an MQ system such as RabbitMQ. But has anyone heard of using MQ systems to cache state, so that when clients initialize, they read from a state queue before subscribing to new state messages? Is that a bad way to use MQ?
So to recap: a monitor would read the current state from a state queue, then set up a subscription queue to receive any new updates. The state queue would be maintained by the monitoring agents that put the alerts there in the first place, removing any alerts that are no longer valid.
The advantage would be decentralized notification, and it is theoretically very scalable by adding more MQ systems to relay events.
I have a use case for RabbitMQ that holds the last valid status of a system. When a new client of that system connects, it receives the current status.
It is so simple to do!
You must use the Last Value Cache custom exchange https://github.com/simonmacmullen/rabbitmq-lvc-plugin
Once installed you send all your status messages to that exchange. Each client that needs the status information will create a queue that will have the most recent status delivered to that queue on instantiation. After that it will continue to receive status updates.
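For illustration, a minimal sketch with the RabbitMQ Java client, assuming the LVC plugin is installed (it registers the "x-lvc" exchange type; the exchange name, routing key, and host here are hypothetical): a producer publishes status updates, and a freshly bound queue immediately receives the last value.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class LvcStatusDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");  // hypothetical broker host
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // "x-lvc" is the exchange type the LVC plugin registers.
            ch.exchangeDeclare("system.status", "x-lvc");
            // A status producer publishes updates as usual:
            ch.basicPublish("system.status", "status", null, "ALL OK".getBytes("UTF-8"));
            // A newly connecting client binds a fresh queue and immediately
            // receives the last value published before the queue existed:
            String q = ch.queueDeclare().getQueue();
            ch.queueBind(q, "system.status", "status");
            ch.basicConsume(q, true,
                (tag, delivery) -> System.out.println("current status: " + new String(delivery.getBody(), "UTF-8")),
                tag -> { });
        }
    }
}
```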
IBM MQ FTE uses this approach for storing logs.
I think it is a good idea, provided you can keep the destination queue from overflowing, because IBM MQ, for example, removes expired messages only during a GET call.
I am working with someone who is trying to achieve load-balancing behavior using JMS Queues with IBM WebSphere MQ. They have multiple Camel JMS consumers configured to read from the same queue. Although this behavior is undefined according to the JMS spec (last time I looked, anyway), they expect a sort of round-robin / load-balancing behavior. And while the spec leaves this undefined, I'm led to believe that the normal behavior of WebSphere MQ is to deliver the message to only one of the consumers, and that it may do some type of load balancing. See here, for example: When multi MessageConsumer connect to same queue(Websphere MQ),how to load balance message-consumer?
But in this particular case, it appears that both consumers are receiving the same message.
Can anyone who is more of an expert with WebSphere MQ shed any light on this? Is there any situation where this behavior is expected? Is there any configuration change that can alleviate this?
I'm leaning towards telling everyone here to use the native WebSphere MQ clustering facility and move away from having multiple consumers pointing at the same queue, but that would be a big change for them, so I'd love to discover a way to make this work.
Not that I'm a fan of relying on anything that's undefined, but if they're willing to rely on IBM specific behavior, I'll leave that up to them.
The only ways for both of them to receive the same message are:
1. There are multiple copies of the message.
2. The apps are browsing the message without a lock, then circling back to delete it.
3. The apps are backing out a transaction and making the message available again.
4. The connection is severed before the app acknowledges the message.
Having multiple apps compete for messages in a queue is a recommended practice. If one app goes down the queue is still served. In a cluster this is crucial because the cluster will continue to direct messages to the un-served queue instance until it fills up.
If it's a Dev system, install SupportPac MA0W and tell it to trace just that one queue and you will be able to see exactly what is happening.
See the JMS spec, section 4.4: the provider must never deliver a second copy of an acknowledged message. An exception is made for session handling in section 4.4.13, which I cover in #4 above. That's pretty unambiguous and part of the official spec, so it is not an IBM-specific behavior.
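To illustrate #4, here is a minimal plain-JMS sketch (the connection factory and queue name are placeholders): with CLIENT_ACKNOWLEDGE, a message received but not yet acknowledged when the connection dies must be redelivered, which is one way two consumers can each end up seeing the same message.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class ExplicitAckConsumer {
    // 'cf' stands in for the provider's connection factory (e.g. looked up via JNDI).
    static void consumeOne(ConnectionFactory cf) throws JMSException {
        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("APP.QUEUE"));
        conn.start();
        Message msg = consumer.receive();
        // ... process the message ...
        // If the connection dies before this call, the provider must redeliver
        // the message (JMS 4.4.13) -- which is how a second consumer can see it.
        msg.acknowledge();
        conn.close();
    }
}
```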
I’m writing a server/client game, and a typical scenario looks like this: one client (clientA) sends a message to the server, where a MessageDrivenBean handles such messages. After the MDB finishes its job, it sends the result message back to another client (clientB).
In my opinion I only need two queues for such communication: one for input, the other for output. Creating a new queue for each connection is not a good idea, right?
The input queue is relatively clear: if several clients send messages at the same time, the messages just wait in the queue, and since there are multiple MDB instances on the server, that should not be a big performance issue.
But I am not so clear about the output queue. Should I use a topic instead of a queue? With a queue, every client listens on the output queue; one of them gets the new message and checks a property to determine whether the message is for it. If not, it rolls back the transaction, and the message goes back to the queue, ready for another client... It should work but must be very slow. If I use a topic instead, every client gets a copy of the message and simply ignores it if it isn't addressed to it. That should be better, right?
I’m new to messaging systems. Are there any suggestions about my implementation? Thanks!
To begin with, choosing JMS as a gaming platform is, well, unusual; businesses use JMS brokers for delivery reliability and transaction support. Do you really need this heavy lifting in a game? Wouldn't your own HTTP-based protocol, for example, be a better fit?
That said, two queues are a standard pattern for point-to-point communication. Creating a queue for each new connection is definitely not OK: message-driven beans are attached to queues at deployment time, so you won't be able to respond to queue-creation events. Besides, queues are not meant to be created and destroyed in short cycles; they're designed to be long-lived entities. If you need to deliver a message to one specific client, have the client listen on the server response queue with a message selector set to filter only the messages intended for that client (see the javax.jms.Message API).
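A minimal sketch of the selector idea (the "targetClient" property name is an arbitrary choice of this example, not an API requirement): the server stamps each reply with the recipient's id, and each client creates its consumer with a selector so the provider delivers only matching messages.

```java
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ResponseRouting {
    // Server side: tag the reply with the intended recipient.
    static void sendReply(Session session, MessageProducer toResponseQueue,
                          String clientId, String body) throws JMSException {
        TextMessage msg = session.createTextMessage(body);
        msg.setStringProperty("targetClient", clientId);  // property name is our choice
        toResponseQueue.send(msg);
    }

    // Client side: the selector makes the provider deliver only matching messages.
    static MessageConsumer listenForReplies(Session session, Queue responseQueue,
                                            String clientId) throws JMSException {
        return session.createConsumer(responseQueue, "targetClient = '" + clientId + "'");
    }
}
```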
With topics it's exactly as you noted — each connected client will get a copy of the message — so again, it's not a good pattern to send to n clients a message that has to be discarded by n-1 clients.
MaDa;
You could stick with one output queue (or topic) and simply tag each message with a header that identifies the intended client. Clients can then listen on the queue/topic using a selector. Hopefully your JMS implementation has efficient server-side selector evaluation.
I am wondering about the reliability of message delivery in messaging systems such as WebSphere MQ or ActiveMQ (used via JMS). As far as I know, messages can be buffered if the recipient is unavailable and will be delivered later.
Now I am wondering what happens if the sender temporarily cannot reach the network. Is there some kind of local buffering which will send the messages later? I assume this depends on where the message broker is running. Are there local brokers on all machines or just a central one?
To pinpoint my question: is a messaging system the right choice if I need to ensure that messages are eventually received, even in the face of temporary network failure? Is a particular setup required to achieve this reliability?
Any pointers to relevant documentation would be appreciated.
The common solution is called "store and forward": once you've handed the message off to the local messaging agent, it becomes that agent's responsibility. The agent might not be a full broker; if the messaging system offers basic delivery guarantees, the local agent will still need to buffer messages persistently until they're handed off to a real broker.
If you really can't afford to lose messages I'd recommend implementing a reliable messaging pattern at the endpoints if you can, i.e. the sender re-sends if no acknowledgement is received within a certain time period and the receiver has duplicate detection to cope with receiving the same message more than once.
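As a minimal sketch of that endpoint-level pattern (the send/awaitAck transport calls are hypothetical stand-ins for whatever channel you use): the sender retries until it sees an acknowledgement, and the receiver keeps a set of processed message IDs so retries are harmless.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ReliableEndpoints {
    // Receiver side: remember processed ids so re-sent duplicates are ignored.
    static final Set<String> processed = ConcurrentHashMap.newKeySet();

    static void onReceive(String messageId, String payload) {
        if (!processed.add(messageId)) {
            return;  // already handled; this is a retry of a delivered message
        }
        // ... process payload exactly once ...
    }

    // Sender side: keep re-sending until an acknowledgement arrives.
    static void reliableSend(String messageId, String payload) throws InterruptedException {
        while (true) {
            send(messageId, payload);
            if (awaitAck(messageId, 5_000)) {  // wait up to 5 s for the receiver's ack
                return;
            }
            // No ack: assume the message or the ack was lost and retry;
            // the receiver's duplicate detection makes the retry safe.
        }
    }

    // Hypothetical transport calls; replace with the real channel.
    static void send(String id, String payload) { /* transport-specific */ }
    static boolean awaitAck(String id, long timeoutMs) throws InterruptedException { return true; }
}
```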
Guaranteed delivery comes with a performance overhead and usually doesn't give any guarantee as to how long your message might take to get there.