Configuring JMS over a WebLogic Cluster

I have a setup of 2 WLS managed servers configured as part of a WLS cluster.
1) The requirement is to send requests to another system and receive responses, using JMS as the interface.
2) A request could originate from either of the managed servers, so the corresponding response should reach the managed server that originated the request.
3) The external system (to which requests are sent) should not need to be aware of how many managed servers are in the cluster (not a must-have requirement)
How should JMS be configured to meet these requirements?

Simple! Set up a response queue for each managed server and add a "reply-to" field to the messages you send to the other system. The other system then reads from the request where to send the reply. Deploy one Message Driven Bean (MDB) on each managed server (i.e. not on the cluster, one per managed server) to consume reply messages sent to that server's reply queue. Note that you might want to use clustered reply queues and persistent messages for load balancing and failover.
This is actually a combination of the Request-Reply and the Return Address patterns.
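A minimal sketch of the sending side, assuming hypothetical JNDI names for the connection factory, the request queue, and this managed server's own reply queue:

    // Return Address pattern: stamp each request with this server's reply queue.
    // All JNDI names here are hypothetical placeholders.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class RequestSender {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue requestQueue = (Queue) ctx.lookup("jms/RequestQueue");
            // Each managed server looks up its own reply queue
            Queue replyQueue = (Queue) ctx.lookup("jms/ReplyQueue-MS1");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(requestQueue);
                TextMessage msg = session.createTextMessage("request payload");
                msg.setJMSReplyTo(replyQueue);        // tell the external system where to reply
                msg.setJMSCorrelationID("req-12345"); // lets the reply be matched to the request
                producer.send(msg);
            } finally {
                conn.close();
            }
        }
    }

The MDB on each managed server then listens only on that server's reply queue, so the response lands on the server that originated the request.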

Related

I need to build a Vert.x virtual host server that channels traffic to other Vert.x apps. How is this kind of inter-app communication accomplished?

I need to build a Vert.x Java app that will be an HTTP server/virtual host (TLS HTTP traffic, WebSocket traffic) and will redirect/channel specific domain traffic to other Vert.x Java apps running on the same server, each in its own JVM.
I have been reading for days but I remain uncertain as to how to approach all aspects of the task.
What I DO know or have experience with:
- Creating an HTTP server, etc.
- Using a Vert.x VirtualHost handler to "handle" incoming traffic for a specific domain
What I DO NOT know:
- How do I "redirect" a domain's traffic to another Vert.x app (this other Vert.x app would also be running on the same server, in its own JVM)?
- Naturally this "other" Vert.x app would need to respond to HTTP requests, etc. What Vert.x mechanisms do I employ to accomplish this aspect of the task?
Are any of the following concepts part of the solution? I'm unfamiliar with them and how they may or may not fit in:
Running each Vert.x app using -cluster option?
Vert.x Streams?
Vert.x Pumps?
There are multiple ways to let your microservices communicate with each other; the fact that all your apps run on the same server doesn't change much, but it does make option 2 easy to configure.
1.) REST-based client-server communication
Both the host and the apps run a web server.
When you handle an incoming request on the host, you simply call the relevant app with an HttpClient (see the sketch below).
Typically all services find each other's addresses via service discovery.
E.g.: each service registers its address in a central registry, and the other services use this registry to look the addresses up.
Note: this may be overkill for you; you could instead just configure the addresses of the other services.
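A rough sketch of this option with Vert.x 4 and the vertx-web-client module; the ports are assumptions, and a real proxy would also copy headers and support methods other than GET:

    // Host server: accepts requests and forwards them to the app that owns
    // the domain, running in its own JVM on the same machine (port assumed).
    import io.vertx.core.Vertx;
    import io.vertx.core.buffer.Buffer;
    import io.vertx.ext.web.client.WebClient;

    public class HostServer {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            WebClient client = WebClient.create(vertx);

            vertx.createHttpServer().requestHandler(req ->
                client.get(8081, "localhost", req.uri()).send(ar -> {
                    if (ar.succeeded()) {
                        Buffer body = ar.result().bodyAsBuffer();
                        req.response()
                           .setStatusCode(ar.result().statusCode())
                           .end(body == null ? Buffer.buffer() : body);
                    } else {
                        // Upstream app unreachable
                        req.response().setStatusCode(502).end("upstream unavailable");
                    }
                })
            ).listen(8080);
        }
    }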
2.) You start the Vert.x microservices in clustered mode
The event bus is then shared among the services.
For all incoming requests you send a broadcast on the event bus.
The responsible app replies to the message.
For further reading you can check out https://vertx.io/docs/vertx-hazelcast/java/#configcluster. You start your projects with the -cluster option and define the clustering in an XML configuration. I think by default it finds the other nodes via local broadcast.
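A sketch of this option, assuming the Hazelcast cluster manager is on the classpath; the event-bus address "app.example.com" is just an illustrative convention for routing by domain:

    // Clustered Vert.x: the event bus spans all JVMs in the cluster.
    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;

    public class ClusteredApp {
        public static void main(String[] args) {
            Vertx.clusteredVertx(new VertxOptions(), res -> {
                if (res.failed()) {
                    res.cause().printStackTrace();
                    return;
                }
                Vertx vertx = res.result();

                // In the app responsible for a domain: consume and reply
                vertx.eventBus().consumer("app.example.com",
                        msg -> msg.reply("response for " + msg.body()));

                // In the host: forward an incoming request over the shared event bus
                vertx.eventBus().request("app.example.com", "/some/path",
                        ar -> System.out.println(ar.succeeded() ? ar.result().body() : ar.cause()));
            });
        }
    }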
3.) You use a message broker like RabbitMQ etc.
All your apps connect to a central message broker.
When a new request comes in to the host, it sends a message to the message broker.
The responsible app then listens for the relevant messages and replies.
The host receives the reply from the message broker.
There are already many existing Vert.x clients for common message brokers like Kafka, Camel, ZeroMQ:
https://github.com/vert-x3/vertx-awesome#integration
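To make the flow concrete, here is a minimal sketch using the plain RabbitMQ Java client (com.rabbitmq.client); in a Vert.x app you would more likely use one of the clients linked above. The queue name and host are illustrative:

    // One shared broker; the host publishes requests, the responsible app consumes.
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class BrokerExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                channel.queueDeclare("app.requests", true, false, false, null);

                // The responsible app listens for the relevant messages
                channel.basicConsume("app.requests", true,
                        (tag, delivery) -> System.out.println(
                                new String(delivery.getBody(), StandardCharsets.UTF_8)),
                        tag -> { });

                // The host publishes each incoming request to the broker
                channel.basicPublish("", "app.requests", null,
                        "GET /some/path".getBytes(StandardCharsets.UTF_8));

                Thread.sleep(1000); // give the consumer a moment before closing
            }
        }
    }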

Shared node-wise queue

I am building a proxy server in Java. The application is deployed in Docker containers (multiple instances).
Below are the requirements I am working on:
Clients send HTTP requests to my proxy server.
The proxy server forwards those requests, in the order received, to the destination node server.
When a destination is not reachable, the proxy server stores those requests and forwards them once the destination becomes available again.
Similarly, when a request fails, it is retried after some time "X".
I implemented a node-wise queue (a HashMap keyed by node name, whose value holds the node's reachability status plus a queue of requests in arrival order).
The above solution works well when there is only one instance, but how do I solve this when there are multiple instances? Is there a shared data structure I can use: ActiveMQ, Redis, Kafka, something of that kind? (I am very new to shared memory/processing.)
Any help would be appreciated.
Thanks in advance.
Ajay
There is an open-source REST proxy for Kafka, based on Jetty, from which you might get some implementation ideas.
https://github.com/confluentinc/kafka-rest
This proxy doesn't store messages itself, because Kafka clusters are highly available for writes and there are typically a minimum of 3 Kafka nodes available for message persistence. The Kafka client in the proxy can be configured to retry if the cluster is temporarily unavailable for writes.
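For illustration, a producer configured along those lines; the settings shown are standard Kafka producer configs, while the broker address and topic are placeholders:

    // A producer that keeps retrying while the cluster is temporarily
    // unavailable for writes. Broker and topic names are placeholders.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RetryingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");                   // wait for in-sync replicas
            props.put("retries", Integer.MAX_VALUE);    // retry transient failures
            props.put("delivery.timeout.ms", "120000"); // overall retry budget
            props.put("enable.idempotence", "true");    // no duplicates on retry

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Keying by node name keeps per-node ordering within a partition
                producer.send(new ProducerRecord<>("node-requests", "node-a", "payload"));
            }
        }
    }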

Implementing Request/Reply Pattern with Spring and RabbitMQ with already existing queues

Let me start by describing the system. There are 2 applications, let's call them Client and Server. There are also 2 queues, request queue and reply queue. The Client publishes to the request queue, and the server listens for that request to process it. After the Server processes the message, it publishes it to the reply queue, which the Client is subscribed to. The Server application always publishes the reply to the predefined reply queue, not a queue that the Client application determines.
I cannot make updates to the Server application. I can only update the Client application. The queues are created and managed by the Server application.
I am trying to implement the request/reply pattern from the Client, such that the reply from the Server is returned synchronously. I am aware of the "sendAndReceive" approach in Spring and how it works, both with a temporary queue for replies and with a fixed reply queue.
Spring AMQP - 3.1.9 Request/Reply Messaging
Here are the questions I have:
Can I utilize this approach with existing queues, which are managed and created by the Server application? If yes, please elaborate.
If my Client application is scaled (multiple instances running at the same time), how do I implement this in such a way that the wrong instance (one in which the request did not originate) does not read the reply from the queue?
Am I able to use the "Default" exchange to my advantage here, in addition to a routing key?
Thanks for your time and your responses.
Yes; simply use a Reply Listener Container wired into the RabbitTemplate.
IMPORTANT: the server must echo the correlationId message property set by the client, so that the reply can be correlated to the request in the client.
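A minimal sketch of that wiring; the queue name and timeout are illustrative:

    // RabbitTemplate with a fixed reply queue, plus a reply listener
    // container wired to it so replies are correlated back to senders.
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ReplyConfig {

        @Bean
        public RabbitTemplate rabbitTemplate(ConnectionFactory cf) {
            RabbitTemplate template = new RabbitTemplate(cf);
            template.setReplyAddress("client.reply.queue"); // the existing reply queue
            template.setReplyTimeout(10000);
            return template;
        }

        @Bean
        public SimpleMessageListenerContainer replyContainer(ConnectionFactory cf,
                                                             RabbitTemplate template) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
            container.setQueueNames("client.reply.queue");
            container.setMessageListener(template); // the template correlates the replies
            return container;
        }
    }

With this in place, rabbitTemplate.sendAndReceive(...) blocks until the correlated reply arrives on the fixed queue.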
You can't. Unlike JMS, RabbitMQ has no notion of message selection; each consumer (in this case, reply container) needs its own queue. Otherwise, the instances will get random replies and it is possible (highly likely) that the reply will go to the wrong instance.
...it publishes it to the reply queue...
With RabbitMQ, publishers don't publish to queues, they publish to exchanges with a routing key. It is bad practice to tightly couple publishers to queues. If you can't change the server to publish the reply to an exchange, with a routing key that contains something from the request message (or use the replyTo property), you are out of luck.
Using the default exchange encourages the bad practice I mentioned in 2 (tightly coupling producers to queues). So, no, it doesn't help.
EDIT
If there's something in the reply that allows you to correlate it to a request, one possibility would be to add a delegating consumer on the server's reply queue: receive the reply, perform the correlation, and route the reply to the proper replyTo.
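A hedged sketch of such a delegating consumer, assuming the reply carries the request's correlationId; the queue name and the in-memory registry are illustrative:

    // Listens on the server's fixed reply queue, correlates each reply, and
    // re-publishes it to the replyTo recorded when the request was sent.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class ReplyRouter {

        // Filled in by the sending side: correlationId -> original replyTo
        static final ConcurrentMap<String, String> pendingReplies = new ConcurrentHashMap<>();

        private final RabbitTemplate template;

        public ReplyRouter(RabbitTemplate template) {
            this.template = template;
        }

        @RabbitListener(queues = "server.reply.queue")
        public void route(Message reply) {
            String correlationId = reply.getMessageProperties().getCorrelationId();
            String replyTo = pendingReplies.remove(correlationId);
            if (replyTo != null) {
                // Re-publish via the default exchange; routing key = queue name
                template.send("", replyTo, reply);
            }
        }
    }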

All JMS Messages from a Distributed Queue across the Cluster

We are currently using WebLogic and distributed queues. I know from the documentation that distributed queues allow you to retrieve a connection to any of the queues across a cluster by using the global JNDI name. It seems one of the main pieces of functionality a distributed queue gives you is load-balanced connections across multiple managed servers. So we have 4 managed servers (two on each physical machine, communicating over multicast), and each managed server has an individual JMS server configured with its own data store.
I am 99% certain I already know the answer to this, but it appears that if you want to consume a message off a queue, and that queue exists on each managed server in the cluster, you cannot technically pull a message off just any of the queues (you can only pull messages off the queue to which you are connected). So if I have a message on managed server 4 and I connect to managed server 1, I won't see the messages on the queue of managed server 4.
So is there a way in Java EE or WLS to consume messages from all the nodes of a queue (across the cluster), like a view into every instance of the queue on each managed server? It doesn't appear so, and the documentation makes it seem like this is not possible, as does this video (around minute 5):
http://www.youtube.com/watch?v=HAKixK_wp0Q
No, you cannot consume a message that is delivered to one managed server while your client is connected to another managed server of the same cluster.
Here's how it works.
When using a UDQ (uniform distributed queue), WLS provides a JNDI name that resolves internally into 4 distinct JNDI names, one for each managed server; the JMS servers on each of the managed servers are distinct.
When you post a message using the UDQ JNDI name, it gets to one of the 4 managed servers according to the algorithm you chose and the other configuration done in your connection factory.
When a message consumer listens to the UDQ, it gets pinned to the JMS server on one of the managed servers. It has no visibility of messages on the other servers.
Usually a UDQ is used in scenarios where you want messages to be consumed concurrently by more than one managed server. You would normally deploy an MDB to the cluster, meaning the MDB will be deployed to each managed server, and each of these will be able to consume the messages from its local JMS server.
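A hedged sketch of such an MDB; when targeted at the cluster, each managed server's instance consumes from its local member of the distributed queue. The JNDI name is a placeholder (in WebLogic the destination can also be configured in weblogic-ejb-jar.xml):

    // MDB deployed to the cluster; one instance pool per managed server,
    // each consuming from the local member of the distributed queue.
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/MyDistributedQueue")
    })
    public class DistributedQueueMdb implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Consumed: " + ((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }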
I believe you can if your message store is configured to use a database. If so, I would think removing an item from the queue would remove it from the shared DB table, i.e. all JMS servers point to the same DB instance and table. That should be pretty easy to test, too.

Design of queues with JMS & QPID

I am currently assigned the task of writing an application which will use the JMS API to communicate, using Apache Qpid as the JMS provider.
Basically, my application will have multiple instances of a server running. Each server will serve a set of unique desks, so each instance will only have data for the desks it is serving.
There will also be multiple instances of clients, each again configured by desk.
Now when a client starts up, it will request the data for its desk from the servers. The request should only go to the server that has that desk's data loaded, and the response should only go back to the client that requested the data for that desk.
I am thinking of using queues for this. I am not sure whether I should create only one request queue shared by all the servers, or separate queues for each server.
For the responses, I am planning to use temporary queues.
Note that requests from a client to the servers are not very frequent; say, each client may send around 50 requests a day.
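For what it's worth, here is a sketch of the single-request-queue variant: each server filters the shared queue with a JMS message selector on a custom "desk" property and replies to the temporary queue named in JMSReplyTo. All names and the "desk" property are hypothetical:

    // One shared request queue; each server consumes only its own desks'
    // requests via a message selector and replies to the client's temp queue.
    // Clients are assumed to set a String property "desk" on each request.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class DeskServer {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
            Queue requestQueue = (Queue) ctx.lookup("deskRequests");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Only receive requests for the desks this server instance owns
            MessageConsumer consumer =
                    session.createConsumer(requestQueue, "desk IN ('desk1', 'desk2')");
            MessageProducer producer = session.createProducer(null); // destination per send

            consumer.setMessageListener(message -> {
                try {
                    TextMessage reply = session.createTextMessage("desk data");
                    reply.setJMSCorrelationID(message.getJMSCorrelationID());
                    producer.send(message.getJMSReplyTo(), reply); // client's temporary queue
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            });
            conn.start();
        }
    }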
Can someone please tell me if this is a good design?
