Are AMQP connections (RabbitMQ) between Cloud Foundry applications possible? - jms

I have two applications deployed on Cloud Foundry: a service application that computes stuff (aka computeService) and a client application that renders HTML for us mortals to hit buttons on (aka clientService). I would like a controller in the clientService to send commands to the computeService (when mortals hit buttons). The broker and the computeService run on the same machine.
I know I cannot make remote AMQP connections into a service on cloudfoundry.com, but I assume I can make connections between applications. However, every sensible address combination for broker and clientService gives me the same error:
javax.jms.JMSException: Could not connect to broker URL: tcp://127.0.0.1:61616. Reason: java.net.ConnectException: Connection refused
Whatever address I try, I cannot post to the queue. The code works flawlessly on my local machine.
My question: can I use RabbitMQ to pass messages between the two applications on Cloud Foundry? And if so, which addresses should I use?
Thanx!

One way to try this out is to create two replicas of the rabbit message example at Spring Samples: a message sender and a message receiver. When deployed, they should share the same rabbit service.
I pushed the rabbit message example that worked for me to: rabbitmessage-sndrcv
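If it helps, here is a minimal sketch of the sender side, assuming Spring AMQP and that the connection details come from the RabbitMQ service bound to the app (for example via VCAP_SERVICES or the cloud namespace), never from localhost; the queue name is made up for illustration:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ComputeCommandSender {

    private final RabbitTemplate rabbitTemplate;

    public ComputeCommandSender(String host, int port, String username, String password, String vhost) {
        // These values must come from the bound rabbit service, not 127.0.0.1.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(host, port);
        connectionFactory.setUsername(username);
        connectionFactory.setPassword(password);
        connectionFactory.setVirtualHost(vhost);
        this.rabbitTemplate = new RabbitTemplate(connectionFactory);
    }

    public void sendCommand(String command) {
        // "compute.commands" is a made-up queue name; the computeService would
        // declare the same queue and consume from it.
        rabbitTemplate.convertAndSend("compute.commands", command);
    }
}

The receiving app declares the same queue and attaches a listener container to it; the important part is that both apps are bound to the same rabbit service and neither side points at 127.0.0.1.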

Related

Working of websocket services in clustered deployment

Let's say I have a WebSocket implemented in Spring Boot. The architecture is microservice. I have deployed the service in a Kubernetes cluster and I have 2 running instances of the service; the socket implementation uses STOMP with Redis as the broker.
Now the first connection is created between a client and one of the service instances. Does all the data flow occur through the client and the connected service? Would the other service also have a connection? In case the current service goes down, would the other service open up a connection?
Now let's say I'm sending some data back to the client which comes through a Kafka topic. Either of the services could read it. If so, would either of them be able to send the data back to the client?
Can someone help me understand these scenarios?
A WebSocket is a persistent connection. After opening it, it will be routed through Kubernetes to a fixed pod. No other pod will receive the connection.
If the pod goes down, the connection is terminated.
If a new connection is created, for example by a different user, it may be routed to a different pod.
What data is transmitted, for example with Kafka as the source, is not relevant in this context. It could be anything.
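If the open question is how the data coming from Kafka reaches the client, the deciding factor is which pod holds the client's session. A minimal sketch of the forwarding side, assuming Spring's SimpMessagingTemplate and a made-up /topic/updates destination:

import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaToWebSocketForwarder {

    private final SimpMessagingTemplate messagingTemplate;

    public KafkaToWebSocketForwarder(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // Called from whatever consumes the Kafka topic on this pod.
    public void forward(String payload) {
        // With the in-memory simple broker, this only reaches clients whose
        // WebSocket connection is held by this pod. If clients may be connected
        // to other pods, either every pod has to consume the Kafka data, or a
        // shared external STOMP broker relay has to be used.
        messagingTemplate.convertAndSend("/topic/updates", payload);
    }
}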

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better way to confirm than any administrative command, because it also confirms that your application, and its security context, is correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your comment about having an increased timer before attempting to reconnect after a failure is well made. It doesn't help anyone if you hammer the queue manager with lots of repeated, closely spaced connection attempts until it is ready to accept your connection. But anything that is going to test the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
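A rough sketch of "just connect and look at the error" in plain JMS terms (the factory would be your configured MQ topic connection factory; names and retry policy are up to you):

import javax.jms.JMSException;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;

public class MqAvailabilityCheck {

    // Returns true if a connection could be established. On failure, the JMS
    // error code and linked exception carry the MQ reason code that explains
    // exactly why the attempt was rejected (for example 2059, queue manager
    // not available).
    public static boolean canConnect(TopicConnectionFactory factory) {
        TopicConnection connection = null;
        try {
            connection = factory.createTopicConnection();
            connection.start();
            return true;
        } catch (JMSException e) {
            System.err.println("Connect failed: errorCode=" + e.getErrorCode()
                    + ", linked=" + e.getLinkedException());
            return false;
        } finally {
            if (connection != null) {
                try { connection.close(); } catch (JMSException ignore) { }
            }
        }
    }
}

Whatever backoff you already use before reconnecting applies around a check like this as well.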

Spring websockets + Amazon MQ limitations

We want to use Spring WebSockets + STOMP + Amazon MQ as a full-featured message broker. We were trying to do benchmarking to find out how many client WebSocket connections a single Tomcat node can handle. But it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased dramatically). So my questions are:
1) Am I correct in assuming that for every WebSocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this correct behavior or are we doing something wrong here/missing something?
2) What can be done here? I mean, I don't think it is a good idea to create a broker node per every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and it is documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you can write your own message broker relay with TCP connection pooling.
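For reference, this is the kind of configuration where that behavior shows up; each client session handled by this relay becomes its own TCP connection to the broker, plus one shared "system" connection for messages sent by the application itself (host, port and credentials below are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // One TCP connection to the broker per client CONNECT, plus one system connection.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("your-broker-endpoint")   // placeholder
                .setRelayPort(61613)                    // STOMP port, placeholder
                .setClientLogin("user")                 // placeholders
                .setClientPasscode("secret")
                .setSystemLogin("user")
                .setSystemPasscode("secret");
    }
}

A custom relay that multiplexes many client sessions over a pooled set of TCP connections would have to do the session-id bookkeeping itself, which is exactly what the session id header quoted above makes possible.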

Spring Boot MQ web app stops consuming messages

I noticed a strange problem in my web app.
The web app is hosted on AWS, is built with the Spring Boot framework, and its main responsibility is consuming messages from MQ. Messages are consumed one by one; there is no concurrency.
I am using the MQQueueConnectionFactory class to prepare my connection.
I properly set parameters like:
host
port
queue manager
channel
int property
client reconnect options
and all ssl required parameters.
All seems to work well until something happens and messages from MQ are no longer consumed. I can see messages waiting to be consumed, but the web app does not pick them up. After the web app is restarted, everything works fine again. I think it may be a problem with the connection, which seems to break after some time (several days). I set the reconnect option on the connection factory, and for now I do not know what else to do to fix it.
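For illustration, the connection setup looks roughly like this (host, channel and queue manager names below are placeholders, not our real values):

import javax.jms.JMSException;

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqConnectionFactoryBuilder {

    public MQQueueConnectionFactory build() throws JMSException {
        MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
        factory.setHostName("mq.example.com");            // placeholder host
        factory.setPort(1414);                            // placeholder port
        factory.setQueueManager("QMGR1");                 // placeholder queue manager
        factory.setChannel("APP.SVRCONN");                // placeholder channel
        factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Reconnect options, so the client tries to re-establish a broken connection.
        factory.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        factory.setClientReconnectTimeout(1800);          // seconds, placeholder
        factory.setSSLCipherSuite("TLS_RSA_WITH_AES_128_CBC_SHA256"); // plus the usual truststore settings
        return factory;
    }
}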
Have you ever encountered a similar problem? Do you have any clues?

JMS with clustered nodes

I have two clustered managed servers running on WebLogic, and separate JMS server1 and server2 are running on each managed server. The problem is that in the application properties file we hardcoded and pass only the JMS server1 JNDI name to the application. So both applications running on each node actually use only one fixed JMS server, which is not truly distributed and clustered. If JMS server1 is down, the whole application will be down.
My question is how to let the application dynamically find a JMS server in the above scenario? Can you please point me in a direction? Thanks!
It's in the WebLogic docs at: http://docs.oracle.com/cd/E14571_01/web.1111/e13738/best_practice.htm#CACDDFJD
Basically you create a comma-separated list of servers, and the JMS connection logic should automatically be able to handle the case when one of the servers is down:
e.g.
t3://hostA:7001,hostB:7001
When you use a property like jms.jndi.provider.url=t3://hostA:31122,hostA:31124
it tells WLS to connect to either hostA:31122 or hostA:31124.
Note that your JMS client is connected to only one host at any given time.
When you shut down hostA, the connection between the JMS client and server is cut abruptly, resulting in an exception. Your code will have to handle this exception gracefully and attempt to connect to WLS again periodically to ensure it connects to hostB.
WLS will internally round-robin the requests if more than one instance of the JMS client is running.
When using an MDB as the JMS client, deploying it to a cluster, and using such a URL, one MDB instance would connect to one host and the other instance would connect to another host. MDBs also inherently have the ability to reconnect periodically to the JMS destination.
An easy solution to your problem could be to:
1) Set jms.jndi.provider.url=t3://hostA:31122,hostA:31124
2) Have 2 instances of the JMS client code running, so one will connect to port 31122 and the other to 31124
3) Set Forward-Delay on the JMS queue so that messages don't sit unconsumed in one queue for long and get forwarded to the other queue, which has an active consumer.
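A rough sketch of the lookup side, assuming the connection factory is registered under a JNDI name like jms/ConnectionFactory (adjust the names to your own setup):

import java.util.Hashtable;

import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusteredJmsLookup {

    public static ConnectionFactory lookupConnectionFactory() throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Both managed servers are listed; the lookup lands on one that is reachable.
        env.put(Context.PROVIDER_URL, "t3://hostA:31122,hostA:31124");

        InitialContext ctx = new InitialContext(env);
        try {
            // "jms/ConnectionFactory" is an example JNDI name; use the name of
            // your clustered connection factory.
            return (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        } finally {
            ctx.close();
        }
    }
}

On a connection failure you still need the reconnect handling described above; the comma-separated URL only widens where the initial lookup can land.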
I am updating my progress here instead of adding more comments. I have tested using a standalone JMS client by changing the properties file from t3://hostA:7001 to t3://hostA:7001,hostB:7001 for the JMS provider. The failover is automatically handled by WLS. No code change. The exception I got above was caused by using wlclient.jar; it works after I changed to wlfullclient.jar.
I followed this link to generate wlfullclient.jar.
Thanks everyone!
