We have one event producer and two clients (two different applications). These two clients have different roles but must receive the same events (order does not matter) from the same queue.
However, I found that each message is consumed by only one of the clients. Is it possible for both clients to receive the same message?
The simplest and most effective way to address this requirement is to use a bridge.
Basically, you bridge the queue used by the producer to a second queue: one client consumes from the main queue and the other from the bridged queue, and each receives its own copy of every message.
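As a rough sketch, such a bridge is declared in the EMS server's bridges.conf; the queue names below are placeholders, not names from the question, and the exact syntax should be checked against the documentation linked below:

```
# bridges.conf -- copy every message arriving on orders.main onto orders.copy as well
[queue:orders.main]
  queue=orders.copy
```

With something like this in place, the producer keeps sending to orders.main; one client consumes orders.main and the other consumes orders.copy, so both see every event.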
For details, you can check the EMS documentation:
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-174DF38C-4FDA-445C-BF05-0C6E93B20189.html
My experience with setting up Tibco infrastructure is minimal, so please excuse any misuse of terminology, and correct me where wrong.
I am a developer in an organization where I don't have access to how the backend is set up for Tibco. However, we have bandwidth issues between our regional centers, which I believe are due to how it's set up.
We have a producer that sends a message to multiple "regional" brokers. However, these brokers won't always have a client that needs to subscribe to the messages.
I have 3 questions around this:
For destination bridges: https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-174DF38C-4FDA-445C-BF05-0C6E93B20189.html
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
It's not clear from the documentation: if a bridge exists to a destination where there is no client consuming messages, does the message still get sent to that destination? I.e., will this consume bandwidth even when no client wants it?
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
A bridge can be used to send messages from one destination to multiple destinations (queues or topics).
Alternatively, Topics can be used to send a message to multiple consumer applications. Topics are not the best solution if a high level of integrity is needed (no message loss, queuing, etc.).
It's not clear from the documentation: if a bridge exists to a destination where there is no client consuming messages, does the message still get sent to that destination? I.e., will this consume bandwidth even when no client wants it?
If the bridge destination is a queue, messages will be put in the queue.
If the bridge destination is a Topic, messages will be distributed only if there are active consumer applications (or durable subscribers).
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
This applies only to Topics (when there is no durable subscriber).
An alternative approach would be to use routing between EMS servers. In this approach, Topics are sent to remote EMS servers only when there is a consumer connected to the remote EMS server (or when there is a durable subscriber).
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-FFAAE7C8-448F-4260-9E14-0ACA02F1ED5A.html
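Very roughly, routing is configured by declaring a route to the remote server and marking the relevant topics as global. The sketch below uses placeholder server names, URL, and topic; the exact stanza syntax should be checked against the routing documentation:

```
# routes.conf on the local server -- the stanza name is the remote server's name
[EMS-REGION-EAST]
  url = tcp://ems-east.example.com:7222

# topics.conf -- only global topics participate in routing
regional.orders.>  global
```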
Let me start by describing the system. There are 2 applications, let's call them Client and Server. There are also 2 queues, request queue and reply queue. The Client publishes to the request queue, and the server listens for that request to process it. After the Server processes the message, it publishes it to the reply queue, which the Client is subscribed to. The Server application always publishes the reply to the predefined reply queue, not a queue that the Client application determines.
I cannot make updates to the Server application. I can only update the Client application. The queues are created and managed by the Server application.
I am trying to implement the request/reply pattern from the Client, such that the reply from the Server is synchronously returned. I am aware of the "sendAndReceive" approach with Spring, and how it works with a temporary queue for reply purposes, and also with a fixed reply queue.
Spring AMQP - 3.1.9 Request/Reply Messaging
Here are the questions I have:
Can I utilize this approach with existing queues, which are managed and created by the Server application? If yes, please elaborate.
If my Client application is a scaled app (multiple instances of it are running at the same time), then how do I also implement it in such a way that the wrong instance (one in which the request did not originate) does not read the reply from the queue?
Am I able to use the "Default" exchange to my advantage here, in addition to a routing key?
Thanks for your time and your responses.
Yes; simply use a Reply Listener Container wired into the RabbitTemplate.
IMPORTANT: the server must echo the correlationId message property set by the client, so that the reply can be correlated to the request in the client.
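A minimal sketch of that wiring with a recent Spring AMQP version (the reply queue name and timeout are assumptions; older versions use setReplyQueue instead of setReplyAddress):

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReplyConfig {

    // Template used for sendAndReceive(); replies are expected on the fixed,
    // server-managed reply queue (the name here is hypothetical).
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory cf) {
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setReplyAddress("server.reply.queue");
        template.setReplyTimeout(10_000);
        return template;
    }

    // Reply listener container: consumes the fixed reply queue and hands each
    // reply back to the template, which correlates it via the correlationId.
    @Bean
    public SimpleMessageListenerContainer replyContainer(ConnectionFactory cf, RabbitTemplate template) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("server.reply.queue");
        container.setMessageListener(template);
        return container;
    }
}
```

A call such as rabbitTemplate.sendAndReceive(exchange, routingKey, message) then blocks until the correlated reply arrives on that queue or the timeout expires.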
You can't. Unlike JMS, RabbitMQ has no notion of message selection; each consumer (in this case, reply container) needs its own queue. Otherwise, the instances will get random replies and it is possible (highly likely) that the reply will go to the wrong instance.
...it publishes it to the reply queue...
With RabbitMQ, publishers don't publish to queues, they publish to exchanges with a routing key. It is bad practice to tightly couple publishers to queues. If you can't change the server to publish the reply to an exchange, with a routing key that contains something from the request message (or use the replyTo property), you are out of luck.
Using the default exchange encourages the bad practice I mentioned in 2 (tightly coupling producers to queues). So, no, it doesn't help.
EDIT
If there's something in the reply that allows you to correlate it to a request, one possibility would be to add a delegating consumer on the server's reply queue: receive the reply, perform the correlation, and route the reply to the proper replyTo.
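A rough sketch of such a delegating consumer with Spring AMQP 2.x; the queue name and the correlation lookup are hypothetical and depend on what the reply actually carries:

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class ReplyRouter {

    private final RabbitTemplate rabbitTemplate;

    public ReplyRouter(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Consume the server's shared reply queue, correlate each reply, and forward
    // it to the per-instance reply queue recorded when the request was sent.
    @RabbitListener(queues = "server.reply.queue") // hypothetical queue name
    public void route(Message reply) {
        String correlationId = reply.getMessageProperties().getCorrelationId();
        String instanceQueue = lookupReplyQueueFor(correlationId); // hypothetical shared lookup
        if (instanceQueue != null) {
            // Forward via the default exchange; the routing key is the target queue name.
            rabbitTemplate.send("", instanceQueue, reply);
        }
    }

    private String lookupReplyQueueFor(String correlationId) {
        // e.g. a shared store mapping correlationId -> the requesting instance's own queue
        return null; // placeholder
    }
}
```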
Can there be more than one listener to a queue manager? I have used one listener per queue manager so far and wonder whether this is possible. I ask because we have 2 applications connecting to the same queue manager and seem to have a problem with that.
There are a couple of meanings for the term listener in an MQ context. Let's see if we can clear up some confusion over the terminology and then answer the question as it relates to each.
As defined in the spec, a JMS listener is an object that implements a callback mechanism. It listens on destinations for messages and calls onMessage when they arrive. The destinations may be queues or topics hosted by any JMS-compliant transport provider.
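In code, such a listener is just a class implementing the callback (a generic JMS sketch; connection and destination setup omitted):

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// A JMS listener in the spec sense: the provider invokes onMessage each time
// a message arrives on the destination the consumer is attached to.
public class OrderListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
// Registered with: consumer.setMessageListener(new OrderListener());
```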
In IBM MQ terms, a listener is a process (runmqlsr) that handles inbound connection requests on a server. Although these can handle a variety of protocols, in practice they are almost exclusively TCP listeners that bind a port (1414 by default) and negotiate connection requests on sockets.
TCP Ports
Tim's answer applies to the second of these contexts. MQ can listen for sockets on multiple ports and indeed it is quite common to do so. Each listener listens on one and only one port. It may listen on that port across all network interfaces or can be bound to a specific network interface. No two listeners can bind to the same combination of interface and port though.
In a B2B context the best practice is to run a dedicated listener for each external business partner to isolate each of their connections across dedicated access paths. Internally I usually recommend separate ports for QMgr-to-QMgr, app-to-QMgr and interactive user connections.
In that sense it is possible to run multiple listeners on a given QMgr. Each of those listeners can accept many connections. Their job is to negotiate the connection then hand the socket off to a Message Channel Agent which talks to the QMgr on behalf of the remotely connected client or QMgr.
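For example, an additional TCP listener on its own port can be defined and started with MQSC; the listener name, port, and address below are illustrative:

```
DEFINE LISTENER(LISTENER.B2B) TRPTYPE(TCP) PORT(1415) IPADDR(10.0.1.5) CONTROL(QMGR)
START LISTENER(LISTENER.B2B)
```

The same thing can be done ad hoc from the command line with runmqlsr, e.g. runmqlsr -m QM1 -t tcp -p 1415.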
JMS Listeners
Based on comments, Ulab refers to JMS listeners. These objects establish a connection to a queue manager and then wait in GET mode for new messages arriving on a destination. On arrival of a message, they call the onMessage method which is an asynchronous callback routine.
As to the question "can there be more than one (JMS) listener to a queue manager?" the answer is definitely yes. A multi-threaded application can have multiple listeners connected, multiple application instances can connect at the same time, and many thousands of application connections can be handled by a single queue manager with sufficient memory, disk and CPU available.
Of course, each of these applications is ultimately connected to one or more queues so then the question becomes one of whether they can connect to the same queue.
Many listeners can listen on the same queue so long as they do not get exclusive access to it. Each will receive a portion of the messages arriving.
Listeners on QMgr-managed subscriptions are exclusively attached to a dynamic queue but multiple instances on the same topic will all receive the same messages.
If the queue is clustered and there is more than one instance of it multiple listeners will be required to get all the messages since they will normally be distributed by MQ workload distribution across those instances.
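A small JMS sketch of that sharing behaviour: two listeners attached to the same queue on one connection, each receiving a portion of the arriving messages (the queue name and the provider-specific ConnectionFactory are assumptions):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class TwoListeners {
    // Assumes a provider-specific ConnectionFactory (e.g. an MQConnectionFactory) is supplied.
    public static void attach(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        Session session1 = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Session session2 = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session1.createQueue("APP.REQUEST.QUEUE"); // illustrative queue name

        MessageConsumer consumer1 = session1.createConsumer(queue);
        MessageConsumer consumer2 = session2.createConsumer(queue);

        // Neither consumer opens the queue exclusively, so arriving messages
        // are shared between the two listeners.
        consumer1.setMessageListener(m -> System.out.println("listener 1 got a message"));
        consumer2.setMessageListener(m -> System.out.println("listener 2 got a message"));
        connection.start();
    }
}
```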
Yes, you can create as many listeners as you wish:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.explorer.doc/e_listener.htm
However, there is no reason why two applications can't connect to the queue manager via the same listener (on the same port). What problem have you run into?
I'm trying to understand the topology of queues and exchanges MT creates in RabbitMQ.
I can't quite make sense of these two statements:
we generate an exchange for each queue so that we can do direct sends to the queue. it is bound as a fanout exchange (is it about sending vs publishing?)
control queues are exclusive and auto-delete - they go away when you go away and are not shared.
Why does MT need to do direct sends? Does this relate to the control queues used by MT internally?
There is also no mention of a dead-letter queue; does that imply MT does not support one out of the box?
Oops, I looked in the wrong place. It's here.
Michael Aldworth has described this in detail in his excellent MassTransit Send vs. Publish blog post.
Essentially, for each MT endpoint you get an exchange-queue pair, which is named after the queueName parameter value in the endpoint configuration. Topics, on the other hand, are RabbitMQ exchanges named after the message contract's full type name.
Publish
You send a message to the topic, meaning to the message-type exchange. MassTransit creates a binding between the message-type exchange and the queue-name exchange at startup. In this way, subscriptions work at the RabbitMQ level. The publisher never knows who will receive the published message, if anyone at all.
Send
When sending, however, you need to specify the receiver address. By doing this you instruct MassTransit to deliver the message directly to the queue-name exchange. There is no binding between the message-type exchange and the queue-name exchange involved here. Therefore, the message will be delivered even if there is no consumer for this message type at the target service. In that case the message will be moved to the dead-letter queue (queue-name_Skipped).
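To make that topology concrete, here is roughly what the broker objects look like, sketched with the plain RabbitMQ Java client rather than MassTransit itself; the exchange and queue names are illustrative (MassTransit derives them from the endpoint queue name and the contract's full type name):

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TopologySketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // defaults to localhost
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Endpoint pair: a fanout exchange named after the queue, bound to the queue.
            channel.exchangeDeclare("order-service", BuiltinExchangeType.FANOUT, true);
            channel.queueDeclare("order-service", true, false, false, null);
            channel.queueBind("order-service", "order-service", "");

            // Message-type exchange, named after the message contract.
            channel.exchangeDeclare("Contracts:SubmitOrder", BuiltinExchangeType.FANOUT, true);

            // Publish path: the type exchange is bound to every subscribed endpoint exchange,
            // so a publish to the type exchange fans out to all subscribers.
            channel.exchangeBind("order-service", "Contracts:SubmitOrder", "");
            channel.basicPublish("Contracts:SubmitOrder", "", null, "published message".getBytes());

            // Send path: the message goes straight to the endpoint exchange,
            // bypassing the type exchange and its subscription bindings.
            channel.basicPublish("order-service", "", null, "sent message".getBytes());
        }
    }
}
```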
How can you create a durable architecture using MQ client and server if the clients don't allow you to persist messages and don't provide assured delivery?
Just trying to figure out how you can build a scalable/durable architecture if the clients don't appear to contain any of the necessary components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance, you might install both TCP and WMQ as transports, and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (Now WebSphere MQ) have largely been solved. The networks have improved by several nines of availability and high availability hardware and software clustering have provided options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default operation of MQ is that when a message is sent (MQPUT or in JMS terms producer.send), the application does not get a response back on the MQPUT call until the message has reached a queue on a queue manager. i.e. MQPUT is a synchronous call, and if you get a completion code of OK, that means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ Server, and therefore you can rely on MQ to look after the message and forward it on to where it needs to get to.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points.
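For illustration, with the MQ classes for JMS that combination (a persistent message sent under syncpoint in a transacted session) looks roughly like this; the queue name is illustrative and connection setup is omitted:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ProtectedSend {
    public static void send(ConnectionFactory factory, String text) throws Exception {
        Connection connection = factory.createConnection();
        try {
            // Transacted session: the PUT stays under syncpoint until commit().
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("APP.REQUEST.QUEUE"); // illustrative name
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survives a queue manager restart

            // send() is synchronous: it returns only once the queue manager has the message.
            producer.send(session.createTextMessage(text));

            session.commit(); // the message becomes available to consumers here
        } finally {
            connection.close();
        }
    }
}
```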
If you have very unreliable networks such that you expect to regularly fail to get the messages to a server, and expect to need regular retries such that you need client-side message protection, one option you could investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1) which is designed for low bandwidth and/or unreliable network communications, as a route into the wider MQ.