Spring Integration @MessagingGateway multiple replies for a single request - spring

I am trying to create an application that can subscribe to an external legacy/non-Spring server using Spring Integration. I am using both AbstractServerConnectionFactory and AbstractClientConnectionFactory. The problem is that after I send the message which enables the subscription, I receive more than one reply for a single request (one ack for the subscription, then a message every x minutes containing subscription data). It seems like @Gateway is not suitable for such a case, so I tried to redirect the replies to a @MessageEndpoint by setting the request channel of the TcpInboundGateway and the reply channel of the TcpOutboundGateway to be the same. This did not help, and I cannot get rid of the TcpOutboundGateway error "Cannot correlate response - no pending reply for ...". I tried to remove CachingClientConnectionFactory and use a plain AbstractClientConnectionFactory, but that did not help either. Single request / single response calls work fine, and I am able to send any packets to my app and they are handled just fine.
Tried to solve this for many days, but I am still stuck. After all this time I assume the problem must be that @MessagingGateway and the @Gateway methods within it cannot handle more than one reply, nor delegate the extra replies to a @MessageEndpoint. Is there any way to get rid of @MessagingGateway and use something capable of the operation described above? Maybe there is a way of using @Header to free the @Gateway methods to handle any replies? I searched the Spring Integration samples on GitHub and the documentation, but did not find answers to those questions.

Gateways are specifically designed for 1 request/1 reply messaging.
To achieve arbitrary bi-directional messaging between client and server, you can use a pair of Collaborating Channel Adapters.
... You can also use collaborating adapters (server-side or client-side) for totally asynchronous communication (rather than with request-reply semantics). On the server side, message correlation is automatically handled by the adapters, because the inbound adapter adds a header that allows the outbound adapter to determine which connection to use when sending the reply message. ...
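For reference, a minimal Java-config sketch of the client side of that arrangement, assuming the legacy server listens on localhost:9999 and that both the subscription ack and the periodic data messages should land on a fromServer channel consumed by a @MessageEndpoint; host, port, and channel names are placeholders, not taken from the question:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.ip.tcp.TcpReceivingChannelAdapter;
import org.springframework.integration.ip.tcp.TcpSendingMessageHandler;
import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;

@Configuration
@EnableIntegration
public class SubscriptionTcpConfig {

    @Bean
    public AbstractClientConnectionFactory clientConnectionFactory() {
        TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory("localhost", 9999);
        factory.setSingleUse(false); // one shared connection for all requests and pushed data
        return factory;
    }

    // outbound adapter: anything sent to "toServer" (e.g. the subscription request) goes out
    @Bean
    @ServiceActivator(inputChannel = "toServer")
    public TcpSendingMessageHandler outboundAdapter(AbstractClientConnectionFactory cf) {
        TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
        handler.setConnectionFactory(cf);
        return handler;
    }

    // inbound adapter: every frame the server pushes (the ack and each periodic data message)
    // is delivered to "fromServer"; no reply correlation is attempted
    @Bean
    public TcpReceivingChannelAdapter inboundAdapter(AbstractClientConnectionFactory cf) {
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(cf);
        adapter.setOutputChannel(fromServer());
        adapter.setClientMode(true); // keep the connection open even between sends
        return adapter;
    }

    @Bean
    public DirectChannel toServer() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel fromServer() {
        return new DirectChannel();
    }
}

With this arrangement there is no reply correlation at all, so the subscription ack and every later data message simply arrive as separate messages on fromServer.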

Related

Implementing Request/Reply Pattern with Spring and RabbitMQ with already existing queues

Let me start by describing the system. There are 2 applications, let's call them Client and Server. There are also 2 queues, request queue and reply queue. The Client publishes to the request queue, and the server listens for that request to process it. After the Server processes the message, it publishes it to the reply queue, which the Client is subscribed to. The Server application always publishes the reply to the predefined reply queue, not a queue that the Client application determines.
I cannot make updates to the Server application. I can only update the Client application. The queues are created and managed by the Server application.
I am trying to implement request/reply pattern from Client, such that the reply from the Server is synchronously returned. I am aware of the "sendAndReceive" approach with spring, and how it works with a temporary queue for reply purposes, and also with a fixed reply queue.
Spring AMQP - 3.1.9 Request/Reply Messaging
Here are the questions I have:
1. Can I utilize this approach with existing queues, which are managed and created by the Server application? If yes, please elaborate.
2. If my Client application is a scaled app (multiple instances of it are running at the same time), how do I also implement it in such a way that the wrong instance (one in which the request did not originate) does not read the reply from the queue?
3. Am I able to use the "Default" exchange to my advantage here, in addition to a routing key?
Thanks for your time and your responses.
1. Yes; simply use a Reply Listener Container wired into the RabbitTemplate.
IMPORTANT: the server must echo the correlationId message property set by the client, so that the reply can be correlated to the request in the client.
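A minimal programmatic sketch of that wiring (assuming Spring AMQP 1.5+; the exchange, routing key, and reply queue names below are placeholders for the server-managed ones):

CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");

RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setExchange("requests.exchange");        // where the server expects requests (placeholder)
template.setRoutingKey("requests");               // placeholder routing key
template.setReplyAddress("server.reply.queue");   // the existing, server-managed reply queue (placeholder)
template.setReplyTimeout(10000);

// the reply listener container: listens on the fixed reply queue and hands each reply
// back to the template, which matches it to the pending request by correlationId
SimpleMessageListenerContainer replyContainer = new SimpleMessageListenerContainer(connectionFactory);
replyContainer.setQueueNames("server.reply.queue");
replyContainer.setMessageListener(template);      // RabbitTemplate is itself a MessageListener
replyContainer.afterPropertiesSet();
replyContainer.start();

// blocks until the correlated reply arrives or the reply timeout expires
Object reply = template.convertSendAndReceive("request payload");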
2. You can't. Unlike JMS, RabbitMQ has no notion of message selection; each consumer (in this case, reply container) needs its own queue. Otherwise, the instances will get random replies and it is possible (highly likely) that the reply will go to the wrong instance.
...it publishes it to the reply queue...
With RabbitMQ, publishers don't publish to queues, they publish to exchanges with a routing key. It is bad practice to tightly couple publishers to queues. If you can't change the server to publish the reply to an exchange, with a routing key that contains something from the request message (or use the replyTo property), you are out of luck.
3. Using the default exchange encourages the bad practice I mentioned in 2 (tightly coupling producers to queues). So, no, it doesn't help.
EDIT
If there's something in the reply that allows you to correlate it to a request, one possibility would be to add a delegating consumer on the server's reply queue: receive the reply, perform the correlation, and route the reply to the proper replyTo.
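A sketch of such a delegating consumer (assuming Spring AMQP 2.x; the shared reply queue name and the correlation field are illustrative assumptions):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ReplyRouter {

    private final RabbitTemplate rabbitTemplate;

    public ReplyRouter(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // drains the server's shared reply queue and re-publishes each reply to the queue of
    // the instance that issued the matching request
    @RabbitListener(queues = "server.reply.queue")   // placeholder queue name
    public void route(Message reply) {
        // whatever correlates the reply to its request goes here; an echoed replyTo
        // property is assumed purely for illustration
        String target = reply.getMessageProperties().getReplyTo();
        if (target != null) {
            this.rabbitTemplate.send(target, reply); // default exchange, queue name as routing key
        }
    }
}

Note that only one instance (or a dedicated component) should drain the shared reply queue, otherwise the random-delivery problem from point 2 reappears.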

spring-amqp not work correctly when connections are blocked

I am using spring-amqp 1.4.4. After a queue contains too many messages and the broker goes above its memory watermark, the RabbitTemplate receive method does not respond if it is called after the send method; it waits indefinitely. In the Spring XML I set reply-timeout="10" on rabbit:template. If I do not call the send method and simply call receive, it works fine. What's wrong?
template.convertAndSend("test message");                        // publish; the broker then blocks the shared connection (memory alarm)
String msg = (String) template.receiveAndConvert("log.queue");  // receiveAndConvert never returns
The rabbitmq guys recommend using separate connections for publishers and consumers, for exactly this reason.
The Spring AMQP CachingConnectionFactory shares a single connection for all users.
We are looking at providing an option to use two connections but, in the meantime, you can configure two connection factories (and templates), one for sends and the other for receives.
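For example (a rough sketch; the host and queue name are placeholders):

CachingConnectionFactory publishConnectionFactory = new CachingConnectionFactory("localhost");
CachingConnectionFactory consumeConnectionFactory = new CachingConnectionFactory("localhost");

// one template per connection, so a publisher connection blocked by the broker's
// memory alarm cannot stall the consuming side
RabbitTemplate sendTemplate = new RabbitTemplate(publishConnectionFactory);
RabbitTemplate receiveTemplate = new RabbitTemplate(consumeConnectionFactory);

sendTemplate.convertAndSend("log.queue", "test message");
String msg = (String) receiveTemplate.receiveAndConvert("log.queue");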

Which is better: multiple web socket endpoints or single web socket endpoint in Java EE7

Java EE 7 allows you to create new endpoints very easily through annotations. However, I was wondering: is having multiple endpoints, one to handle each message type, a good idea, or should I have just one endpoint facade for everything?
I am leaning towards having one single end-point facade based on the theory that each endpoint creates a new socket connection to the client. However, that theory could be incorrect and Web Socket may be implemented so that it will use just one TCP/IP socket connection regardless of how many web socket end points are connected so long as they connect to the same host:port.
I am asking specifically for Java EE 7, as there may be other web socket server implementations that may do things differently.
Just noticed an ambiguity in my question re: message types. When I say message types I mean different kinds of application messages, not native message types such as "binary" or "text". As such I marked @PavelBucek's answer as the accepted one.
However, I did try an experiment with Glassfish and two endpoints. My suspicion was correct: there is a TCP connection established per connected endpoint. This causes more load on the server side if more than one websocket endpoint is used on a single page.
As such I concluded that there should be only one endpoint to handle the application messages provided that everything is a single native type.
This would mean that the application needs to do the dispatching rather than relying on some higher-level API to do it for us (see the sketch below).
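For what it's worth, application-level dispatch inside a single endpoint can be as simple as the following sketch; the "type:" prefix convention and the endpoint path are made up for illustration:

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/events")
public class EventEndpoint {

    // one text MessageHandler per session (as the spec requires); the application
    // decides what to do based on a "type:" prefix in the payload
    @OnMessage
    public void onMessage(String message, Session session) {
        int sep = message.indexOf(':');
        String type = sep > 0 ? message.substring(0, sep) : "unknown";
        String body = sep > 0 ? message.substring(sep + 1) : message;

        switch (type) {
            case "chat":
                // handle a chat payload (body)
                break;
            case "status":
                // handle a status payload (body)
                break;
            default:
                session.getAsyncRemote().sendText("unsupported type: " + type);
        }
    }
}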
The only valid answer here is the first option: having multiple endpoints.
See the WebSocket spec, chapter 2.1.3:
The API limits the registration of MessageHandlers per Session to be one MessageHandler per native websocket message type. [WSC 2.1.3-1] In other words, the developer can only register at most one MessageHandler for incoming text messages, one MessageHandler for incoming binary messages, and one MessageHandler for incoming pong messages. The websocket implementation must generate an error if this restriction is violated [WSC 2.1.3-2].
As for using or not using multiple TCP connections: AFAIK there will currently be a new connection for every client endpoint and there is no easy way to force anything else. WebSocket multiplexing should solve this, but I don't think any WebSocket API implementation supports it yet (I might be wrong).

Can anyone explain the request-reply broker zeromq example?

I'm referring to 'A Request-Reply Broker' in the ZeroMQ documentation: http://zguide.zeromq.org/chapter:all
I'm getting the general gist of the app: it acts like an intermediary and routes messages from the client to the server and back again.
What I'm not getting though is how it makes sure the correct response from a server is sent to the correct client which originally made the request. I don't see anything in the code example which makes sure about this.
Now in the example they only send 1 message (hello) and 1 response (world), so even if messages are mixed up it doesn't matter, but I'm guessing that the test client and server are kept deliberately simple.
Any thoughts are welcome...
All zeromq sockets implicitly have an identity associated with them. (You can obtain this identity with zmq_getsockopt().)
For bi-directional socket types other than XREQ and XREP, this identity is automatically transferred as part of every message sent over the socket. The REP socket uses this identity to route the response message back to the appropriate socket. This has the effect of automatic routing.
Under the hood, identities are transferred via multipart messages. The first message in a multipart message will contain the socket identity. An empty message will follow, followed by all messages specified by the user. The REQ and REP sockets deal with these prefixed messages automatically. However, if you are using XREQ or XREP sockets, you need to populate these identity messages yourself.
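To make the envelope concrete, here is a small sketch (using a recent JeroMQ Java binding; the port is arbitrary) of what a ROUTER socket, the newer name for XREP, actually sees and must echo back for the reply to reach the right client:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class EchoBroker {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket frontend = ctx.createSocket(SocketType.ROUTER);
            frontend.bind("tcp://*:5559");

            while (!Thread.currentThread().isInterrupted()) {
                byte[] identity = frontend.recv(0); // frame 1: the client's identity
                byte[] empty = frontend.recv(0);    // frame 2: empty delimiter added by REQ
                byte[] payload = frontend.recv(0);  // frame 3: the actual request

                // send the envelope back unchanged so the reply is routed to that client
                frontend.send(identity, ZMQ.SNDMORE);
                frontend.send(empty, ZMQ.SNDMORE);
                frontend.send(payload, 0);
            }
        }
    }
}

REQ and REP add and strip these frames for you, which is why the hello/world example never has to touch them.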
If you search for "identity" on the ZMQ Guide, you should find all the details you will ever want to know about how identities and socket routing works.
OK, in chapter 3 they suddenly explain that there is an underlying concept of an 'envelope' which the req/rep pattern invisibly uses.
This explains how it works.

How would I create an asynchronous notification system using RESTful web services?

I have a Java application which I make available via RESTful web services. I want to create a mechanism so clients can register for notifications of events. The rub is that there is no guarantee that the client programs will be Java programs and hence I won't be able to use JMS for this (i.e. if every client was a Java app then we could allow the clients to subscribe to a JMS topic and listen there for notification messages).
The use case is roughly as follows:
A client registers itself with my server application, via a RESTful web service call, indicating that it is interested in getting a notification message anytime a specific object is updated.
When the object of interest is updated then my server application needs to put out a notification to all clients who are interested in being notified of this event.
As I mentioned above I know how I would do this if all clients were Java apps -- set up a topic that clients can listen to for notification messages. However I can't use that approach since it's likely that many clients will not be able to listen to a JMS topic for notification messages.
Can anyone here enlighten me as to how this problem is typically solved? What mechanism can I provide using a RESTful API?
I can think of four approaches:
1. A Twitter approach: you register the Client and then it calls back periodically with a GET to retrieve any notifications.
2. The Client describes how it wants to receive the notification when it makes the registration request. That way you could allow JMS for those that can handle it and fall back to email or similar for those that can't.
3. Take a URL during the registration request and POST back to each Client individually when you have a notification (a sketch follows this list). Hardly Pub/Sub, but the effect would be similar. Of course you'd be assuming that the Client was listening for these notifications and had implemented their Client according to your specs.
4. Buy IBM WebSphere MQ (MQSeries). Best IBM product ever. Not REST, but it's great at multi-platform integration like this.
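A bare-bones sketch of option 3 (the callback URLs and JSON payload are assumptions; a real implementation would need retries and timeouts):

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class WebhookNotifier {

    // POST the event to every callback URL that a client supplied at registration time
    public void notifyAll(List<String> callbackUrls, String jsonEvent) {
        for (String callbackUrl : callbackUrls) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(callbackUrl).openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                conn.getOutputStream().write(jsonEvent.getBytes(StandardCharsets.UTF_8));
                conn.getResponseCode(); // fire and forget; a real system would check and retry
                conn.disconnect();
            } catch (Exception e) {
                // a real implementation would log and schedule a retry
            }
        }
    }
}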
We have this problem and need low-latency asynchronous updates to relatively few listeners. Our two alternative solutions have been:
Polling: Hammer the list of resources you need with GET requests
Streaming event updates: Provide a monitor resource. The server keeps the connection open. As events occur, the server transmits a stream of event descriptions using multipart content-type or chunked transfer-encoding.
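As an illustration of the streaming variant: a plain servlet that never sets a Content-Length is delivered with chunked transfer-encoding, and each flush pushes one event description to the client. The fixed demo events and the thread-per-client loop below are simplifications for illustration.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/monitor")
public class EventStreamServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        for (int i = 0; i < 5; i++) {   // a real server would wait on real events instead
            out.println("event " + i);
            out.flush();                // push this chunk to the client now
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}

The client reads the body incrementally instead of waiting for the response to complete.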
In the response to the RESTful request, you could supply an individualized RESTful URL that the client can monitor for updates.
That is, you have one URL (/Signup.htm, say) that accepts the client's information (id if appropriate, id of the object to monitor) and returns a customized URL (/Monitor/XYZPDQ), where XYZPDQ is a UUID created for that particular client. The client can poll that customized URL at some interval, and it will receive a notification if the update occurs.
If you don't care about who the client is (and don't want to create so many UUIDs) you could just have separate RESTful URLs for each object that might want to be monitored, and the "signup" URL would just return the correct one.
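A rough JAX-RS sketch of that signup/monitor scheme (paths, the in-memory map, and the plain-text payloads are all illustrative):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/")
public class MonitorResource {

    // token -> latest unread notification ("" means nothing pending)
    private static final ConcurrentMap<String, String> notifications = new ConcurrentHashMap<>();

    // called by the server application whenever the monitored object changes
    public static void publish(String token, String event) {
        notifications.replace(token, event);
    }

    @POST
    @Path("signup/{objectId}")
    public Response signup(@PathParam("objectId") String objectId) {
        String token = UUID.randomUUID().toString();
        notifications.put(token, "");
        // a real implementation would also remember which object this token watches
        return Response.ok("/monitor/" + token).build();
    }

    @GET
    @Path("monitor/{token}")
    public Response monitor(@PathParam("token") String token) {
        String event = notifications.replace(token, ""); // fetch and clear in one step
        if (event == null) {
            return Response.status(Response.Status.NOT_FOUND).build(); // unknown token
        }
        if (event.isEmpty()) {
            return Response.noContent().build(); // nothing new yet; poll again later
        }
        return Response.ok(event).build();
    }
}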
As John Saunders says, you can't really do a more straightforward publish/subscribe via HTTP.
If polling is not acceptable I would consider using web-sockets (e.g. see here). Though to be honest I like the idea suggested by user189423 of multipart content-type or chunked transfer-encoding as well.
