RabbitMQ - Exchange/Queues disappearing - ruby

I am using the Bunny Ruby client for RabbitMQ messaging.
I have a bash script that provisions a durable topic exchange, a number of durable queues and binds them.
I am using Bunny on both client and server side to send messages between them.
However, I find that when I terminate either connection (client or server), my exchange and queues disappear. I would like to configure things so that even if the server side fails, the client can still push messages to the queue and they will be processed once the server is back online.
Is this possible with Bunny/RabbitMQ?

Related

Minimizing bandwidth on multiple Tibco brokers / destinations

My experience with setting up Tibco infrastructure is minimal, so please excuse any misuse of terminology, and correct me where wrong.
I am a developer in an organization where I don't have access to how the Tibco backend is set up. However, we have bandwidth issues between our regional centers, which I believe are due to how it's configured.
We have a producer that sends a message to multiple "regional" brokers. However, these brokers won't always have a client that needs to subscribe to the messages.
I have 3 questions around this:
For destination bridges: https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-174DF38C-4FDA-445C-BF05-0C6E93B20189.html
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination?  I.e., will this consume bandwidth even with no client wanting it?
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
A bridge can be used to send messages from one destination to multiple destinations (queues or topics).
Alternatively, topics can be used to send a message to multiple consumer applications. Topics are not the best solution if a high level of integrity is needed (no message loss, queuing, etc.).
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination? I.e., will this consume bandwidth even with no client wanting it?
If the bridge destination is a queue, messages will be put in the queue.
If the bridge destination is a topic, messages will be distributed only if there are active consumer applications (or durable subscribers).
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
This applies only to topics (when there is no durable subscriber).
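For reference, bridges are defined in the EMS server's bridges.conf. A minimal sketch, with illustrative destination names, might look like this:

# bridges.conf - every message published on the source topic is also
# delivered to the bridged queue and to the bridged topic
[topic:regional.orders]
  queue=regional.orders.archive
  topic=regional.orders.eu selector="region='EU'"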
An alternative approach would be to use routing between EMS servers. With routing, topic messages are sent to a remote EMS server only when there is a consumer connected to that server (or when there is a durable subscriber):
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-FFAAE7C8-448F-4260-9E14-0ACA02F1ED5A.html

Enable WebSocket Connections with multiple Pods in Spring Boot Application

I'm using the WebSocket protocol in my Spring Boot application. Multiple pods are used to handle heavy traffic. Now, having multiple pods is causing an issue. Let me explain it briefly.
Let's assume there are 2 pods (Pod 1, Pod 2). The Angular UI subscribes to the Spring Boot application over the WebSocket protocol, let's say via Pod 1. Now the Spring Boot application sends a message to the UI, let's say via Pod 2, and this message gets dropped (it never reaches the UI) since the WebSocket connection was established via Pod 1.
Because of this, messages sent to the UI by pods other than the one used for the initial subscription are dropped; only the messages sent via the pod that handled the subscription reach the UI.
How can I tackle this scenario so that every message is delivered to the UI in this multi-pod environment?
The solution to the multiple-pod issue is to use an external message broker (like RabbitMQ or ActiveMQ) instead of the in-memory message broker (the default behavior).
You may face the issues below while implementing this (I'm writing them down in one place so that you don't have to struggle as much as I did 🙂):
Creating Auto-Delete Queues
When using an external message broker, you might observe that queues are created for every WebSocket connection but are not deleted when the connection is closed. We don't even need these queues afterwards; hence the need for auto-delete queues, which are removed automatically when the WebSocket connection is closed. Declaring them is easy:
When using user destinations with an external message broker, check the broker documentation on how to manage inactive queues, so that when the user session is over, all unique user queues are removed. For example, RabbitMQ creates auto-delete queues when destinations like /exchange/amq.direct/position-updates are used. So in that case the client could subscribe to /user/exchange/amq.direct/position-updates. Similarly, ActiveMQ has configuration options for purging inactive destinations.
In simple terms, both the WebSocket client and the WebSocket server should use an exchange destination of the form /exchange/amq.direct/<anything>.
For more info, read the official docs
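As a concrete sketch (the publisher class and the position-updates destination are illustrative, not from the question): the client subscribes to /user/exchange/amq.direct/position-updates, and the server pushes to that user destination like this:

import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

// Sketch only: class name and destination are illustrative.
@Service
public class PositionUpdatePublisher {

    private final SimpMessagingTemplate template;

    public PositionUpdatePublisher(SimpMessagingTemplate template) {
        this.template = template;
    }

    public void pushUpdate(String username, String payload) {
        // Spring resolves the user destination to a session-specific one such as
        // /exchange/amq.direct/position-updates-user{sessionId}; RabbitMQ backs the
        // STOMP subscription on it with an auto-delete queue that is removed when
        // the WebSocket session closes.
        template.convertAndSendToUser(username, "/exchange/amq.direct/position-updates", payload);
    }
}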
ssl/stomp protocol on Cloud instance
Another issue you might face when you host your application on AWS, Azure, or Google Cloud is that the managed broker there uses STOMP over SSL, so code that works fine on your local machine (which uses plain STOMP) doesn't work in the cloud.
Broadcasting message from one pod to other pods
This issue is the same as the one described in this Stack Overflow question (refer to that question for details).
Now, let me put up the code snippet, with comments indicating which part of the snippet fixes which issue. Add it inside your configureMessageBroker method:
ReactorNettyTcpClient<byte[]> tcpClient = new ReactorNettyTcpClient<>(
        client -> client
                .host(yourRabbitmqCloudHost)
                .port(yourRabbitmqCloudStompPort)
                .secure(SslProvider.defaultClientProvider()),
        new StompReactorNettyCodec());

messageBrokerRegistry
        // enables the STOMP broker relay instead of the in-memory broker
        .enableStompBrokerRelay("/queue", "/topic", "/exchange")
        .setClientLogin(yourRabbitmqCloudClientLogin)
        .setClientPasscode(yourRabbitmqCloudClientPasscode)
        .setSystemLogin(yourRabbitmqCloudSystemLogin)
        .setSystemPasscode(yourRabbitmqCloudSystemPasscode)
        // broadcast unresolved user destinations to every pod
        .setUserDestinationBroadcast("/topic/unresolved-user-destination")
        .setUserRegistryBroadcast("/topic/user-registry")
        // enables the ssl/stomp protocol
        .setTcpClient(tcpClient);
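For context, here is a minimal sketch of the enclosing configuration class (the /ws endpoint path is an assumption); the relay snippet above goes inside configureMessageBroker:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Endpoint the Angular UI connects to; the path is illustrative.
        registry.addEndpoint("/ws").setAllowedOriginPatterns("*").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry messageBrokerRegistry) {
        // ... the broker relay configuration shown above goes here ...
    }
}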

Can you have a backup durable subscriber for ActiveMQ

I have a master/slave AMQ broker setup for JMS messaging. I have two servers that I would like to set up as master/slave durable consumers using Apache Camel. We've been achieving this by having both servers attempt to connect with the same client ID. One node handles all of the work, but if it goes down the other node connects and picks right back up on the work. This has been working fine for having a single consumer at a time, but it makes noise in the disconnected server's log files with the message
ERROR org.apache.camel.component.jms.DefaultJmsMessageListenerContainer]
(Camel (spring-context) thread #0 - JmsConsumer[global.topic.event]) Could
not refresh JMS Connection for destination 'global.topic.event' - retrying
using FixedBackOff{interval=5000, currentAttempts=12,
maxAttempts=unlimited}. Cause: Broker: broker - Client: client already
connected from tcp://xxx.xx.xx.xxx:xxxx
Is there a proper way to get the functionality that I'm looking to achieve? I was considering having the slave server ping the master to coordinate which one is connected but I'd like to keep the implementation as simple as possible.
Convert your usage of topics on the consumer side to Virtual Topics. Virtual Topics allow you to continue to have existing message flows produce and consume from the topic, but also have consumers listen on specially named queues.
Once you are consuming from a queue, you can implement all the consumer patterns-- exclusive consumer (which allows that hot-standby backup consumer), message groups, parallel consumers, etc.
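As a rough sketch (the queue prefix, route, and bean name are assumptions, and it relies on ActiveMQ's default VirtualTopic naming convention), the Camel consumer side could look like this:

import org.apache.camel.builder.RouteBuilder;

// Producers keep publishing to topic VirtualTopic.global.topic.event;
// each consuming application reads from its own mirrored queue.
public class EventConsumerRoute extends RouteBuilder {

    @Override
    public void configure() {
        // consumer.exclusive=true asks the broker to deliver to one consumer
        // at a time, giving the hot-standby behaviour described above.
        from("activemq:queue:Consumer.eventService.VirtualTopic.global.topic.event"
                + "?destination.consumer.exclusive=true")
            .to("bean:eventHandler");
    }
}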

MQ Input/Output count increasing when Datapower client is connected using MQ front side handler

I am using MQ 7.5.0.2 and Datapower client IDG7
When MQ sends messages to Datapower, Datapower receives those messages using MQ front side handlers, and in the same way it sends messages using the backend URL.
But the problem I am facing is that whenever Datapower connects to MQ, the queue Input/Output count increases to around 10~20 and remains the same, and the handle state is INACTIVE.
When I look at the queue details using the command below, it displays the following:
display qstatus(******) type(handle)
QUEUE(********) TYPE(HANDLE)
APPLDESC(WebSphere MQ Channel)
APPLTAG(WebSphere Datapower MQClient)
APPLTYPE(SYSTEM) BROWSE(NO)
CHANNEL(*****) CONNAME(******)
ASTATE(NONE) HSTATE(INACTIVE)
INPUT(SHARED) INQUIRE(NO)
OUTPUT(NO) PID(25391)
QMURID(0.1149) SET(NO)
TID(54)
URID(XA_FORMATID[] XA_GTRID[] XA_BQUAL[])
URTYPE(QMGR)
Can anyone help me with this? It only clears whenever I restart the queue manager, but I don't want to restart the qmgr every time.
HSTATE in the INACTIVE state indicates "No API call from a connection is currently in progress for this object. For a queue, this condition can arise when no MQGET WAIT call is in progress." This is likely to happen if the application (DP in this case) opened the queue and is then not issuing any API calls on the opened object. PID 25391 - is this an amqrmppa process? Is DP expected to consume messages on this queue continuously?

jms order of message delivery with high availability

I have set up a uniform distributed queue with WebLogic Server 12c. I am trying to achieve ordered delivery and high availability with a JMS distributed queue. In my prototype testing deployment I have two managed servers in the cluster, let us say managed_server1 and managed_server2. Each of these managed servers hosts a JMS server, namely JMS server1 and JMS server2. I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, such as java queuproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
Bring up a consumer to listen on java queuconsumer t3://managed_server2. This consumer cannot consume the messages, since the producer sent all the messages to JMS server1 and it is down.
Bring up managed_server1 and start a consumer listening to t3://managed_server1; I can get all the messages.
Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages? Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the ordering of messages by time across these producers is not guaranteed.
I am a little lost, am I missing something? Can unit-of-order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers receive all the messages from the producers? But there is only one consumer in my application, so if the JMS server my consumer is listening to fails and I switch over to the other JMS server, I might start getting messages from the beginning rather than from where I left off.
Any suggestions regarding the same will be helpful.
Good question!
"Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages?"
Ans - No, you do not lose all your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
"Also, if there is another producer sending messages to t3://managed_server2, then the ordering of messages by time across these producers is not guaranteed. Can unit of order help me overcome this?"
Ans - If you want the messages to be consumed strictly in a certain order, then you will have to make use of unit-of-order (UOO). When messages are sent using UOO, they are sent to one of the several UDQ member destinations; if that destination fails midway and migration is enabled, its pending messages are migrated to the next UDQ member, and new UOO messages are also delivered to that new destination.
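To make that concrete, here is a minimal producer sketch (the JNDI names and the UOO name are illustrative assumptions) that pins related messages to one UDQ member by setting the WebLogic-specific JMS_BEA_UnitOfOrder message property:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class UooProducer {
    public static void main(String[] args) throws Exception {
        // Assumes the t3 provider URL and JNDI properties are supplied via jndi.properties.
        Context ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Destination udq = (Destination) ctx.lookup("jms/MyDistributedQueue");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(udq);
            TextMessage msg = session.createTextMessage("order event 1");
            // All messages carrying the same unit-of-order name are routed to a
            // single UDQ member and delivered in order; with migration enabled
            // they follow that member if it fails over.
            msg.setStringProperty("JMS_BEA_UnitOfOrder", "customer-42");
            producer.send(msg);
        } finally {
            con.close();
        }
    }
}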
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.
