RabbitMQ queue gets deleted immediately after creation. Why? - spring-boot

I'm trying to deploy Spring Boot microservice applications that produce and consume data using RabbitMQ, on a K8s cluster in Azure AKS.
When I run the producer application and publish a message to the queue through Postman, I get a 200 OK response, but the RabbitMQ management UI shows no queues, and in the RabbitMQ container logs I see the error below:
o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'employeeexchange' in vhost '/', class-id=60, method-id=40)
I'm not able to figure out what I'm doing wrong.
If you have any idea (or need any kind of additional information), let me know.

You can use a bean like the one below to create a queue (the queue name here is only an example):

@Bean
Queue queue() {
    // Queue(String name, boolean durable, boolean exclusive, boolean autoDelete)
    return new Queue("employeequeue", true, false, false);
}

Parameters:
name - the name of the queue.
durable - true if we are declaring a durable queue (the queue will survive a server restart).
exclusive - true if we are declaring an exclusive queue (the queue can only be used by the declarer's connection); false here.
autoDelete - true if the server should delete the queue when it is no longer in use; false here.
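
Note that the 404 in your log says there is no exchange named 'employeeexchange', so the exchange and a binding also have to be declared before the producer publishes. A minimal sketch, assuming the exchange name from the error message and a hypothetical queue name and routing-key pattern:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // Durable, non-exclusive, non-auto-delete queue (the name is a placeholder)
    @Bean
    Queue employeeQueue() {
        return new Queue("employeequeue", true, false, false);
    }

    // The exchange named in the 404 error; it must exist before the producer publishes to it
    @Bean
    TopicExchange employeeExchange() {
        return new TopicExchange("employeeexchange");
    }

    // Bind the queue to the exchange; the routing-key pattern is a placeholder
    @Bean
    Binding employeeBinding(Queue employeeQueue, TopicExchange employeeExchange) {
        return BindingBuilder.bind(employeeQueue).to(employeeExchange).with("employee.#");
    }
}

Spring Boot's auto-configured RabbitAdmin declares Queue, Exchange and Binding beans on the broker when the first connection is opened, so with this configuration in place the queue and exchange should show up in the management UI.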

Related

How to configure Spring AMQP @RabbitListener not to throw exceptions when the queue does not exist?

This is my @RabbitListener code:
@RabbitListener(queues = "device.*")
I want this listener to listen to all the queues created by devices on my broker, where * is the ID of a device, e.g. device.1.
Currently, when I start my app and the queues have not been created yet, I'm getting an exception:
ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'device.*' in vhost '/'
What am I doing wrong?
Wildcards/patterns in queue names are not supported by AMQP/RabbitMQ.
Devices should only send messages to exchanges with routing keys; consumers, not producers, are responsible for queues.
Use a topic exchange with routing keys such as device.1 and bind a single queue with the binding key device.#, for example:
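
A minimal sketch of that layout (exchange, queue and bean names are placeholders):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class DeviceQueueConfig {

    // One concrete queue that collects the messages from every device
    @Bean
    Queue deviceQueue() {
        return new Queue("device-events", true, false, false);
    }

    @Bean
    TopicExchange deviceExchange() {
        return new TopicExchange("device-exchange");
    }

    // device.# matches device.1, device.2, ... when devices publish with routing key device.<id>
    @Bean
    Binding deviceBinding(Queue deviceQueue, TopicExchange deviceExchange) {
        return BindingBuilder.bind(deviceQueue).to(deviceExchange).with("device.#");
    }
}

@Component
class DeviceMessageHandler {

    // The listener points at the concrete queue, not at a pattern
    @RabbitListener(queues = "device-events")
    public void onDeviceMessage(String payload) {
        // the message's routing key still identifies the individual device if needed
    }
}

The devices keep publishing with routing keys such as device.1; the single bound queue receives all of them, and the listener no longer refers to queue names that do not exist.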

How to create temp queue in Spring Websocket

While using Spring's STOMP support with a full-featured message broker, a queue is created for each new session, and by default the queue is durable. Is there a way to make the per-session queues non-durable (auto-delete)?
I have tried adding headers like {'durable': false, 'auto-delete': true} to the SUBSCRIBE frame, and Spring gives a "Message broker not active" error.
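
For reference, passing those headers from Spring's own STOMP client looks roughly like the sketch below (the destination and the already-connected session are assumptions); custom SUBSCRIBE headers such as durable/auto-delete only carry meaning when Spring relays the frame to an external STOMP broker such as RabbitMQ, not with the simple in-memory broker:

import java.lang.reflect.Type;

import org.springframework.messaging.simp.stomp.StompFrameHandler;
import org.springframework.messaging.simp.stomp.StompHeaders;
import org.springframework.messaging.simp.stomp.StompSession;

public class NonDurableSubscriber {

    // 'session' is assumed to be an already-connected StompSession
    public void subscribe(StompSession session) {
        StompHeaders headers = new StompHeaders();
        headers.setDestination("/queue/session-updates"); // hypothetical destination
        // Extra headers sent in the SUBSCRIBE frame; only an external broker relay
        // (e.g. the RabbitMQ STOMP plugin) interprets durable / auto-delete
        headers.set("durable", "false");
        headers.set("auto-delete", "true");

        session.subscribe(headers, new StompFrameHandler() {
            @Override
            public Type getPayloadType(StompHeaders headers) {
                return String.class;
            }

            @Override
            public void handleFrame(StompHeaders headers, Object payload) {
                // handle the incoming frame
            }
        });
    }
}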

Spring JMS 4.3.2 + Jboss EAP 6.4.8 + Webmethods Jms Broker 8.2 + Durable shared topic subscription

I'm trying to subscribe to a topic with durable and shared subscriptions enabled, so that multiple instances can connect to the topic to increase scalability.
However, only the first instance connects without errors; the second instance's message listener keeps throwing the error below. I checked with my webMethods counterpart, and he found that the client state was disabled, which is why the second listener was not able to connect using the same subscription name.
Can someone throw light on this issue, please?
18:14:15,050 WARN
[org.springframework.jms.listener.DefaultMessageListenerContainer]
(DefaultMessageListenerContainer-145) Setup of JMS message listener
invoker failed for destination 'topicName' - trying to recover. Cause:
[BRM.10.2209] JMS: Durable subscription
"connectionFactory##subscriptionName" is in use.
The message
JMS: Durable subscription "connectionFactory##subscriptionName" is in use.
typically hints at a misconfiguration of your Topic on the Broker. Please check (with MWS) that the Topic really has "Shared State=true".
Then make sure your Connection Factory has a "Connection Factory Client ID" set.
And finally you should set the following JVM setting:
-Dcom.webmethods.jms.clientIDSharing=true
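
For context, the Spring side of such a subscription is usually a listener container configured for a durable subscription that every instance opens under the same name; the sketch below is only an outline with placeholder names (the broker-side settings above are still required):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class SharedDurableSubscriptionConfig {

    @Bean
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
                                                             MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setPubSubDomain(true);                       // the destination is a topic
        container.setDestinationName("topicName");             // topic from the log above
        container.setSubscriptionDurable(true);
        container.setSubscriptionName("subscriptionName");     // same subscription name on every instance
        container.setClientId("connectionFactoryClientId");    // must match the broker-side client ID
        container.setMessageListener(listener);
        return container;
    }
}

With -Dcom.webmethods.jms.clientIDSharing=true set on each JVM, more than one connection is allowed to use the same client ID, which, together with "Shared State=true" on the topic, is what lets the second instance attach to the same durable subscription.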

How does a JMS consumer work on a broker CLUSTER of Oracle Message Queue?

We have an application running in a GlassFish 3.1.2.2 cluster (two instances) that writes its results to "the_output_queue".
GlassFish sets up Message Queue as an embedded broker cluster, which in turn also has two message broker instances corresponding directly to the two GlassFish instances.
Now I would like to consume results from the_output_queue with an external JMS client (think Android app).
I assumed that a broker cluster can somehow be accessed transparently by a JMS client, but I cannot get this to work. I only succeed in connecting a JMS client to one individual broker.
If I have one JMS client running, connected to one broker, I get only half of the messages. The physical queue (the_output_queue) defined in the GlassFish Administration Console exists in both brokers, and the messages get evenly distributed thanks to load balancing.
This text from the Oracle manuals sounds to me like every message should be available in all broker instances of the cluster, i.e. if only a single JMS consumer is running it should receive all messages irrespective of the broker instance it is connected to.
"The home broker is responsible for routing and delivering the messages to all consumers of the destination, whether these consumers are local (connected to the home broker) or remote (connected to other brokers in the cluster)."
Have I misunderstood this completely?
Can a JMS client access an Oracle Message Queue broker cluster transparently?
How would the connection string look?
Is there some "global cluster target" (instead of an individual broker) to which the JMS client can connect? Where could I find the connection details for the cluster?
Is there something special in the GlassFish setup I have to verify? The settings currently are (default setup created by jelastic.com, looks good to me):
JMS Availability:
JMS Service Type: Embedded
JMS Cluster Type: Conventional
JMS Configuration Store Type: Master Broker
JMS Message Store Type: File
GMS is enabled
Answer to the main question: Yes, a JMS client can connect to any instance of a cluster and GlassFish will replicate the messages. I have tested it on my PC.
The problem in Jelastic is discussed in this posting.
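
To answer the connection-string part of the question: an external OpenMQ client normally lists every broker of the cluster in the connection factory's imqAddressList, so it can fail over between them; the host names and ports below are placeholders:

import javax.jms.Connection;
import javax.jms.JMSException;

import com.sun.messaging.ConnectionConfiguration;
import com.sun.messaging.ConnectionFactory;

public class ClusterClient {

    public static Connection connect() throws JMSException {
        ConnectionFactory factory = new ConnectionFactory();
        // List every broker of the cluster (placeholder addresses)
        factory.setProperty(ConnectionConfiguration.imqAddressList,
                "mq://broker-one:7676,mq://broker-two:7676");
        // Try the addresses in the given order when connecting or reconnecting
        factory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "PRIORITY");
        return factory.createConnection();
    }
}

The client still talks to one broker at a time, but as the quoted manual text says, the home broker routes messages to consumers connected to other brokers in the cluster.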

single jms consumer for multiple jms servers

I am using a distributed JMS queue, and WebLogic is my app server. There are three JMS servers deployed in my clustered environment. The producers just send the message using the queue's JNDI name, 'udq' for example. Now I have associated a consumer with each JMS server and I was able to consume the messages; no problem so far.
Here is the question: can I have a single consumer consume the messages from the 3 JMS servers? WebLogic allows JNDI lookup of individual distributed-destination members with the following syntax:
// One session per connection; qcon1..qcon3 are connections to the three JMS servers
qsession1 = qcon1.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
qsession2 = qcon2.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
qsession3 = qcon3.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

// Look up each distributed-queue member directly via the JMSServer#Destination syntax
queue1 = (Queue) ctx.lookup("JMSServer-1#UDQ");
queue2 = (Queue) ctx.lookup("JMSServer-2#UDQ");
queue3 = (Queue) ctx.lookup("JMSServer-3#UDQ");

// One receiver per member, all sharing this object's onMessage()
qreceiver1 = qsession1.createReceiver(queue1);
qreceiver2 = qsession2.createReceiver(queue2);
qreceiver3 = qsession3.createReceiver(queue3);
qreceiver1.setMessageListener(this);
qreceiver2.setMessageListener(this);
qreceiver3.setMessageListener(this);

qcon1.start();
qcon2.start();
qcon3.start();
I have only one onMessage() implemented for the above consumers. This does not work. Any suggestions, please?
No, you can't have a single consumer receiving messages from more than one JMS server. A consumer can receive messages from only one JMS server, so you need to create multiple consumers to receive messages from multiple JMS servers.
you can use "Queue Forwarding" function.
From online documentation of wls 10.3:
"Queue members can forward messages to other queue members by configuring the Forward Delay attribute in the Administration Console, which is disabled by default. This attribute defines the amount of time, in seconds, that a distributed queue member with messages, but which has no consumers, will wait before forwarding its messages to other queue members that do have consumers."
here a link: http://docs.oracle.com/cd/E13222_01/wls/docs103/jms/dds.html#wp1260816
You can configure this feature from administration of each single queue.
