Spring RSocket Bi-Directional Channel

Is there a way to send messages from client to server in a strictly sequential way, like flatMapSequential? The default behaviour seems to be only roughly sequential, like flatMap, if I use
requester.route("name")
    .data(fluxSendToServer.doOnNext(nr -> log.trace("Send next " + nr.getRequest().getCurrentSequenceValue())))
    .retrieveFlux(ResponseMessageWrapper.class)
and log the sequence at the server.
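To illustrate the operator semantics I mean, here is a minimal standalone Reactor sketch (plain Reactor, not RSocket; all names are just for the demo) showing how flatMap can emit out of source order while flatMapSequential preserves it:

import reactor.core.publisher.Flux;
import java.time.Duration;

public class OrderingDemo {
    public static void main(String[] args) throws InterruptedException {
        // flatMap merges inner publishers as they complete, so later source
        // elements with shorter delays can overtake earlier ones
        Flux.range(1, 5)
            .flatMap(i -> Flux.just(i).delayElements(Duration.ofMillis(60 - i * 10)))
            .subscribe(i -> System.out.println("flatMap: " + i));

        // flatMapSequential also subscribes eagerly, but queues results so the
        // downstream sees them in source order
        Flux.range(1, 5)
            .flatMapSequential(i -> Flux.just(i).delayElements(Duration.ofMillis(60 - i * 10)))
            .subscribe(i -> System.out.println("flatMapSequential: " + i));

        Thread.sleep(500); // crude wait so the async demo output can appear
    }
}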

Thank you for your answer. If I change the code at the client to
requester.route("name").data(fluxSendToServer.index().map(indexedTuple -> {
    // index() is zero-based, the sequence values start at 1
    if (!indexedTuple.getT2().getRequest().getCurrentSequenceValue().equals(indexedTuple.getT1() + 1)) {
        throw new IllegalStateException("Ordering problem in client at index " + indexedTuple.getT1());
    }
    return indexedTuple.getT2();
}))
and use the same check in the receiving server method
@MessageMapping("name")
Flux<EncryptedResponseMessageWrapper> handleMessageClientToServer(Flux<EncryptedRequestMessageWrapper> fluxReceivedAtServer) {
    return fluxReceivedAtServer.index().map(indexedTuple -> {
        if (!indexedTuple.getT2().getRequest().getCurrentSequenceValue().equals(indexedTuple.getT1() + 1)) {
            throw new IllegalStateException("Ordering problem at server at index " + indexedTuple.getT1());
        }
        return indexedTuple.getT2();
    })
    // ... subsequent mapping to EncryptedResponseMessageWrapper omitted
}
For a Flux of 4000 elements I get the IllegalStateException every time, at a random position, for example: Error in Route send-message-client-to-server: Ordering problem at server at index 2175.
With fewer elements it seems to work reliably.

Related

Updating Apache Camel JPA object in database triggers deadlock

So I have an Apache Camel route that reads Data elements from a JPA endpoint, converts them to DataConverted elements, and stores them in a different database via a second JPA endpoint. Both endpoints are Oracle databases.
Now I want to set a flag on the original Data element indicating that it was copied successfully. What is the best way to achieve that?
I tried it like this: saving the ID in the context, then reading it and calling a DAO method in .onCompletion().onCompleteOnly().
from("jpa://Data")
    .onCompletion().onCompleteOnly().process(ex -> {
        var id = Long.valueOf(getContext().getGlobalOption("id"));
        myDao().setFlag(id);
    }).end()
    .process(ex -> {
        Data data = ex.getIn().getBody(Data.class);
        DataConverted dataConverted = convertData(data);
        ex.getMessage().setBody(dataConverted);
        var globalOptions = getContext().getGlobalOptions();
        globalOptions.put("id", data.getId().toString());
        getContext().setGlobalOptions(globalOptions);
    })
    .to("jpa://DataConverted").end();
However, this seems to trigger a deadlock: the DAO method stalls on the commit of the update. The only explanation I can see is that the Data object gets locked by Camel and is still locked in the .onCompletion().onCompleteOnly() part of the route, so it can't be updated there.
Is there a better way to do it?
Have you tried using the Recipient List EIP, where the first destination is the jpa:DataConverted endpoint and the second destination is the endpoint that sets the flag? This way both get the same message and will be executed sequentially.
https://camel.apache.org/components/3.17.x/eips/recipientList-eip.html
from("jpa://Data")
    .process(ex -> {
        Data data = ex.getIn().getBody(Data.class);
        DataConverted dataConverted = convertData(data);
        ex.getIn().setBody(dataConverted);
    })
    .recipientList(constant("direct:DataConverted,direct:updateFlag")) // comma is the default delimiter
    .end();

from("direct:DataConverted")
    .to("jpa://DataConverted")
    .end();

from("direct:updateFlag")
    .process(ex -> {
        // assumes DataConverted carries over the original Data id
        var id = ((DataConverted) ex.getIn().getBody()).getId();
        myDao().setFlag(id);
    })
    .end();
Keep in mind, you might want to make the route transactional by adding .transacted()
https://camel.apache.org/components/3.17.x/eips/transactional-client.html
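A minimal sketch of the first route with .transacted() added (assuming a transaction manager/policy is already configured for the Camel context; an illustration, not a tested drop-in):

from("jpa://Data")
    .transacted()
    .process(ex -> {
        Data data = ex.getIn().getBody(Data.class);
        ex.getIn().setBody(convertData(data)); // convert before fanning out
    })
    .recipientList(constant("direct:DataConverted,direct:updateFlag"))
    .end();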

How to extract content from Mono<List<T>> in WebFlux to pass it down the call chain

I want to be able to extract the List<Payload> from the Mono<List<Payload>> to pass it to a downstream service for processing (or maybe return it from the read(RequestParams params) method, instead of returning void):
@PostMapping("/subset")
public void read(@RequestBody RequestParams params) {
    Mono<List<Payload>> result = reader.read(params.getDate(), params.getAssetClasses(), params.getFirmAccounts(), params.getUserId(), params.getPassword());
    ....
}
where reader.read(...) is a method on an autowired Spring service that uses a WebClient to get the data from an external web service API:
public Mono<List<Payload>> read(String date, String assetClasses, String firmAccounts, String id, String password) {
    Flux<Payload> nodes = client
        .get()
        .uri(uriBuilder -> uriBuilder
            .path("/api/subset")
            .queryParam("payloads", true)
            .queryParam("date", date)
            .queryParam("assetClasses", assetClasses)
            .queryParam("firmAccounts", firmAccounts)
            .build())
        .headers(header -> header.setBasicAuth("abc123", "XXXXXXX"))
        .retrieve()
        .onStatus(HttpStatus::is4xxClientError, response -> {
            System.out.println("4xx error");
            return Mono.error(new RuntimeException("4xx"));
        })
        .onStatus(HttpStatus::is5xxServerError, response -> {
            System.out.println("5xx error");
            return Mono.error(new RuntimeException("5xx"));
        })
        .bodyToFlux(Payload.class);
    Mono<List<Payload>> records = nodes.collectList();
    return records;
}
Doing a blocking result.block() is not allowed in WebFlux and throws an exception:
IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread ...
What is a proper way to extract the contents of a Mono in WebFlux?
Is it some kind of a subscribe()? What would be the syntax?
Thank you in advance.
There is no "proper way" and that is the entire point. To get the value you need to block, and blocking is bad in webflux for many reasons (that I won't go into right now).
What you should do is to return the publisher all the way out to the calling client.
One of the things that many usually have a hard time understanding is that webflux works with a producer (Mono or Flux) and a subscriber.
Your entire service is also a producer, and the calling client can be seen as the subscriber.
Think of it as a long chain, that starts at the datasource, and ends up in the client showing the data.
A simple rule of thumb is that whomever is the final consumer of the data is the subscriber, everyone else is a producer.
So in your case, you just return the Mono<List<T> out to the calling client.
@PostMapping("/subset")
public Mono<List<Payload>> read(@RequestBody RequestParams params) {
    Mono<List<Payload>> result = reader.read(params.getDate(), params.getAssetClasses(), params.getFirmAccounts(), params.getUserId(), params.getPassword());
    return result;
}
While the following does return the value of the Mono observable in the logs:
@PostMapping("/subset")
@ResponseBody
public Mono<ResponseEntity<List<Payload>>> read1(@RequestBody RequestParams params) {
    Mono<List<Payload>> result = reader.read(params.getDate(), params.getAssetClasses(), params.getFirmAccounts(), params.getUserId(), params.getPassword());
    return result
        .map(e -> new ResponseEntity<List<Payload>>(e, HttpStatus.OK));
}
the understanding I was seeking was the proper way to compose a chain of calls with WebFlux, whereby a response from one of its operators/legs (materialized as a result from a WebClient call producing a set of records, as above) could be passed downstream to another operator/leg to facilitate a side effect such as saving those records in a DB.
Would it be a good idea to model each of those steps as a separate REST endpoint, and then have another endpoint for a composition operation which internally calls each independent endpoint in the right order, or would other design choices be preferable?
That is ultimately the understanding I was looking for, so if anyone wants to share example code, as well as opinions on how to better implement the set of steps described above, I'm willing to accept the most comprehensive answer.
Thank you.
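As a sketch of that kind of composition (the reactive repository and its saveAll method are assumptions for illustration, not part of the original service), the save can be chained as a non-blocking step instead of extracting the list:

@PostMapping("/subset")
public Mono<List<Payload>> readAndStore(@RequestBody RequestParams params) {
    return reader.read(params.getDate(), params.getAssetClasses(),
                       params.getFirmAccounts(), params.getUserId(), params.getPassword())
        // side effect: persist the records, then pass the same list downstream
        .flatMap(list -> payloadRepository.saveAll(list)  // hypothetical ReactiveCrudRepository
                                          .then(Mono.just(list)));
}

This keeps everything inside one reactive chain, so no block() and no extra REST hop between the steps is needed.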

Connection timeout in jPOS client

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583 based server. For some reason the server is sometimes unable to respond, so my program keeps waiting for the response and hangs. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field) {
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");
        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been sent");
        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));
        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}
Well, the real proper way would be to use Q2. Given you don't need a persistent connection, you could just set a timeout on the channel.
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
channel.setTimeout(timeout); // timeout in milliseconds
This way the channel will auto-disconnect if nothing happens during the time specified by the timeout, and your call to receive will throw an exception.
The alternative is using Q2 and a mux (see QMUX, for which you need to run Q2, or ISOMUX, which is kind of deprecated).
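For reference, a QMUX request looks roughly like this (a sketch assuming a running Q2 deployment with a mux named "mymux"):

import org.jpos.iso.ISOMsg;
import org.jpos.iso.MUX;
import org.jpos.q2.iso.QMUX;

MUX mux = QMUX.getMUX("mymux");           // looks up "mux.mymux" in the NameRegistrar
ISOMsg response = mux.request(m, 30000L); // waits up to 30 s, returns null on timeout
if (response == null) {
    // handle the timeout instead of hanging
}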

ZeroMQ choose recipient

I'm new to ZeroMQ (and to networking in general), and have a question about using ZeroMQ in a setup where multiple clients connect to a single server. My situation is as follows:
--1 server
--multiple clients
--Clients send messages to server: I've already figured out how to do this part.
--Server sends messages to a specific client: This is the part I'm having trouble with. When certain events get handled on the server, the server will need to send a message to a specific client -- not all clients. In other words, the server will need to be able to choose which client to send a given message to.
Right now, this is my server code:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateResponseSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.Send("ack"); // There is no overload for the 'Send' method that takes an IP address as an argument!
        }
    }
}
I have a feeling that the problem is that my design is wrong, and that the ResponseSocket type isn't meant to be used in the way that I want to use it. Since I'm new to this, any advice is very much appreciated!
When using the response socket you are always replying to the client that sent you the message, so the request and response socket types together give you just simple request-response.
For more complicated scenarios you probably want to use dealer-router.
With router, the first frame of each message is the routing id (the identity of the client that sent you the message),
so your example with router will look like:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateRouterSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            byte[] routingId = server.Receive();
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.SendMore(routingId).Send("ack");
        }
    }
}
I also suggest reading the ZeroMQ guide; it will probably answer most of your questions.

XHR Bandwidth reduction

So we're using XHR to validate that pages exist and have content, but as we make a lot of requests we wanted to trim down some of the bandwidth used.
We thought about using a HEAD request to check for a non-200 status, but that's still two requests if the page exists, so we came up with this sample code:
Ajax.prototype.get = function (location, callback)
{
    var Request = new XMLHttpRequest();
    Request.open("GET", location, true);
    Request.onreadystatechange = function ()
    {
        if (Request.readyState === Request.HEADERS_RECEIVED)
        {
            if (Request.status != 200)
            {
                // Ignore the data to save bandwidth
                callback(Request);
                Request.abort();
            }
            else
            {
                // Override the callback here to ensure a single callback fires
                Request.onreadystatechange = function ()
                {
                    if (Request.readyState === Request.DONE)
                    {
                        callback(Request);
                    }
                }
            }
        }
    }
    Request.send(null);
}
What I would like to know is: does this actually work, or does the response body always come back to the client anyway?
Thanks
I won't give a definitive answer, but I have some thoughts that are too long for a comment.
Theoretically, aborting the request should cause the underlying connection to be closed. Assuming TCP-based communication, that means sending a FIN to the server, which should then stop sending data and ACK the FIN. But this is HTTP and there might be other magic going on (like connection pipelining, etc.)...
Anyway, when you close the connection early, the client will still receive all the data that was sent within the round-trip delay, as the server keeps sending at least until it gets the stop signal. If you have a medium delay and a high-bandwidth connection, this could be a lot of data and will, depending on the total amount, most likely be a good portion of the complete response.
Note that, while your code will not receive any of this data, it will still be transferred to the client's network device and passed at least a little way up the network stack. So, while this data never reaches your application level, the bandwidth is consumed anyway.
My (educated) guess is that it will not save as much as you would like (under "normal" conditions). I would suggest that you do a real-world test and see if it is worth the effort.
