How can I send numerous UDP messages from a UDP client? - reactor-netty

I want to send numerous UDP messages from a UDP client at once, but the demo only sends one message. How can I achieve this?
With the demo code I can only send a finite number of messages. I want to use a while(true) loop to keep sending messages. How can I do that?
public static void main(String[] args) {
    Connection connection =
            UdpClient.create()
                     .host("localhost")
                     .port(8080)
                     .handle((udpInbound, udpOutbound) ->
                             udpOutbound.sendString(Mono.just("end"))
                                        .sendString(Mono.just("end1"))
                                        .sendString(Mono.just("end2")))
                     .connectNow(Duration.ofSeconds(30));

    connection.onDispose()
              .block();
}

You can use Flux instead of Mono when you want to send more than one message.
One sendString(Flux) invocation is better than the approach with many sendString(Mono) invocations.
The example below uses Flux.interval so that you have an infinite stream that emits a message every 100ms. Also, when you have an infinite stream, you have to switch to a flush-on-each strategy:
Connection connection =
        UdpClient.create()
                 .host("localhost")
                 .port(8080)
                 .handle((udpInbound, udpOutbound) ->
                         udpOutbound.options(NettyPipeline.SendOptions::flushOnEach)
                                    .sendString(Flux.interval(Duration.ofMillis(100))
                                                    .map(l -> l + "")))
                 .connectNow(Duration.ofSeconds(30));
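If you only need a fixed batch of messages rather than an infinite stream, the same idea applies with a finite Flux. A minimal sketch reusing the question's host, port, and message values:

Connection connection =
        UdpClient.create()
                 .host("localhost")
                 .port(8080)
                 .handle((udpInbound, udpOutbound) ->
                         // one sendString(Flux) call instead of three sendString(Mono) calls
                         udpOutbound.sendString(Flux.just("end", "end1", "end2")))
                 .connectNow(Duration.ofSeconds(30));

connection.onDispose()
          .block();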

Spring Integration - Concurrent access to SFTP outbound gateway GET w/ STREAM and accessing the response from Queue Channel

Context
Per the Spring docs https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#using-the-get-command, the GET command on the SFTP outbound gateway with the STREAM option returns the input stream corresponding to the file passed in on the input channel.
We could configure an integration flow similar to the recommendation at https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#configuring-with-the-java-dsl-3:
@Bean
public QueueChannelSpec remoteFileOutputChannel() {
    return MessageChannels.queue();
}

@Bean
public IntegrationFlow sftpGetFlow() {
    return IntegrationFlows.from("sftpGetInputChannel")
            .handle(Sftp.outboundGateway(sftpSessionFactory(),
                        AbstractRemoteFileOutboundGateway.Command.GET, "payload")
                    .options(AbstractRemoteFileOutboundGateway.Option.STREAM))
            .channel("remoteFileOutputChannel")
            .get();
}
I plan to obtain the input stream from the caller, similar to the approach in the edits to the question here: No Messages When Obtaining Input Stream from SFTP Outbound Gateway.
public InputStream openFileStream(final int retryCount, final String filename, final String directory)
        throws Exception {
    InputStream is = null;
    for (int i = 1; i <= retryCount; ++i) {
        if (sftpGetInputChannel.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
            is = getInputStream();
            if (is != null) {
                break;
            } else {
                logger.info("Failed to obtain input stream so attempting retry " + i + " of " + retryCount);
                Thread.sleep(ftpTimeout);
            }
        }
    }
    return is;
}

private InputStream getInputStream() {
    Message<?> msg = stream.receive(ftpTimeout);
    if (msg == null) {
        return null;
    }
    return (InputStream) msg.getPayload();
}
I would like to pass the input stream to the item reader that is part of a Spring Batch job. The job would read from the input stream and close the stream/session upon completion.
Question
The response from the SFTP outbound gateway is sent to a queue channel. If there are concurrent GET requests to the gateway from multiple jobs/clients, how does the consumer pick the appropriate input stream from the blocking queue in the queue channel? The solution I could think of:
Mark getInputStream as synchronized. This would ensure that only one consumer at a time can send commands to the outbound gateway. Since all we are doing is returning a reference to the input stream, it is not a huge performance bottleneck. We could also cap the capacity of the queue channel as an additional measure.
This is not an ideal solution, because it is entirely possible for other devs to bypass the synchronized method and interact with the outbound gateway directly. We would run the risk of fetching an incorrect stream.
The underlying SFTP client implementation used by Spring doesn't impose any such restrictions, so I am seeking a Spring Integration solution that can overcome this problem.
Does GET with STREAM return any headers carrying the input file name from the payload, so that the client can make sure the stream corresponds to the requested file? That would require peeking into the queue and inspecting messages before popping one off. Not ideal, I think.
Is there a way to pass the response queue channel name as a parameter from the caller?
Appreciate any insights.
Yes, simply set the replyChannel header to a new QueueChannel for each request and terminate the flow with the gateway; if there is no output channel, the outbound gateway sends the reply to the header channel.
That is similar to how inbound gateways work.
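A minimal sketch of that approach, assuming sftpGetFlow ends at the gateway (i.e. the .channel("remoteFileOutputChannel") line is removed, so the reply goes to the per-request header channel):

public InputStream openFileStream(final String filename, final String directory) {
    // a fresh reply channel per request, so concurrent callers cannot receive each other's streams
    QueueChannel replyChannel = new QueueChannel();
    Message<String> request = MessageBuilder.withPayload(directory + "/" + filename)
            .setReplyChannel(replyChannel)
            .build();
    if (!sftpGetInputChannel.send(request, ftpTimeout)) {
        return null;
    }
    // only this caller reads from this channel
    Message<?> reply = replyChannel.receive(ftpTimeout);
    return reply == null ? null : (InputStream) reply.getPayload();
}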

Connection time out in jpos client

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583 based server. However, for some reason the server is unable to respond, so my program keeps waiting for the response and hangs. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field) {
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");
        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been sent");
        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));
        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}
Well, the real proper way would be to use Q2. Given you don't need a persistent connection, you could just set a timeout for the channel:
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
channel.setTimeout(timeout); // timeout in millis
This way the channel will auto-disconnect if nothing happens during the time specified by timeout, and your call to receive will throw an exception.
The alternative is using Q2 and a mux (see QMUX, for which you need to run Q2, or ISOMUX, which is kind of deprecated).
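For reference, a QMUX-based request looks roughly like the sketch below. It assumes a running Q2 with a mux deployed under the name "mymux" (QMUX registers itself under the "mux." prefix in the NameRegistrar); the timeout is again in milliseconds:

MUX mux = (MUX) NameRegistrar.getIfExists("mux.mymux");
ISOMsg response = mux.request(m, timeout); // returns null on timeout instead of blocking forever
if (response == null) {
    log4j.info("No response within " + timeout + " ms");
}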

Blocking tcp packet receiving in Netty 4.x

How can I block Netty from sending ACK responses to the client in Netty 4.x?
I'm trying to control the TCP packet receive speed in Netty in order to forward those packets to another server. Netty receives all the client's packets immediately, but it needs more time to send them out, so the client thinks the transfer has finished once it has sent everything to Netty.
So I want to know how to hold off on receiving packets while Netty forwards the previously received packets to the other server.
Not sure I really understand your question, so let me reformulate:
I suppose that your Netty server is acting as a proxy between clients and another server.
I suppose that what you want to do is send the ack back to the client only once you have really sent the forwarded packet to the final server (not necessarily received by the final server, but at least sent by the Netty proxy).
If so, then you should use the future of the forwarded packet to respond back with the ack, such as (pseudo code):
channelOrCtxToFinalServer.writeAndFlush(packetToForward).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        // Perform the ack write back to the client
        ctxOfClientChannel.writeAndFlush(AckPacket);
    }
});
where:
channelOrCtxToFinalServer is the ChannelHandlerContext or Channel connected to your remote final server from your Netty proxy,
and ctxOfClientChannel is the current ChannelHandlerContext of the Netty handler that receives the packet from the client in its public void channelRead(ChannelHandlerContext ctxOfClientChannel, Object packetToForward) method.
EDIT:
For the big file transfer issue, you can have a look at the proxy example here.
In particular, pay attention to how the client's packets are received one at a time:
yourServerBootstrap.childOption(ChannelOption.AUTO_READ, false);
// Allows controlling, one chunk at a time, the speed of reception of the client's packets
In your frontend handler:
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    // was able to flush out data, start to read the next chunk
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    }
}
And finally, using the very same logic, add the final ack to your client (the ack depending of course on your protocol): (see here and here)
/**
 * Closes the specified channel after all queued write requests are flushed.
 */
static void closeOnFlush(Channel ch) {
    if (ch.isActive()) {
        ch.writeAndFlush(AckPacket).addListener(ChannelFutureListener.CLOSE);
    }
}
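For completeness, in the proxy example the equivalent helper is called from channelInactive when the peer connection closes. Adapted to this use case, the backend (server-facing) handler could trigger the final ack to the client; a sketch following the example's structure, where inboundChannel is the client-channel field from that example:

@Override
public void channelInactive(ChannelHandlerContext ctx) {
    // the final server closed its side: flush the AckPacket to the client, then close
    closeOnFlush(inboundChannel);
}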

ZeroMQ choose recipient

I'm new to ZeroMQ (and to networking in general), and have a question about using ZeroMQ in a setup where multiple clients connect to a single server. My situation is as follows:
--1 server
--multiple clients
--Clients send messages to server: I've already figured out how to do this part.
--Server sends messages to a specific client: This is the part I'm having trouble with. When certain events get handled on the server, the server will need to send a message to a specific client -- not all clients. In other words, the server will need to be able to choose which client to send a given message to.
Right now, this is my server code:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateResponseSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.Send("ack"); // There is no overload for the 'Send' method that takes an IP address as an argument!
        }
    }
}
I have a feeling that the problem is that my design is wrong, and that the ResponseSocket type isn't meant to be used in the way that I want to use it. Since I'm new to this, any advice is very much appreciated!
When using the Response socket you are always replying to the client that sent you the message, so the Request-Response socket types together give you just a simple request-response pattern.
For more complicated scenarios you probably want to use Dealer-Router.
With Router, the first frame of each message is the routing id (the identity of the client that sent you the message),
so your example with Router will look like:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateRouterSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            byte[] routingId = server.Receive();
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.SendMore(routingId).Send("ack");
        }
    }
}
I also suggest reading the ZeroMQ guide; it will probably answer most of your questions.
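On the client side, the key detail is giving the Dealer socket a stable identity, which is what the Router sees as the routing id. A minimal client sketch (shown in Java with the ZMQ binding used in the next answer below; the endpoint and identity are just examples):

ZMQ.Context context = ZMQ.context(1);
ZMQ.Socket client = context.socket(ZMQ.DEALER);
client.setIdentity("client-1".getBytes()); // the Router receives this as the first frame
client.connect("tcp://127.0.0.1:5555");

client.send("hello", 0);          // arrives at the Router as [routing id]["hello"]
String reply = client.recvStr(0); // "ack" routed back via SendMore(routingId).Send("ack")
System.out.println("Reply: " + reply);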

Using zmq_poll and zmq_send() on a same socket

I'm confused by a warning in the API documentation of zmq_poll: "The zmq_send() function will clear all pending events on a socket. Thus, if you use zmq_poll() to monitor input on a socket, use it before output as well, and process all events after each zmq_poll() call."
I don't understand what that means. Since events are level-triggered, if I call zmq_send() and then zmq_poll(), any pending messages in the socket's buffer should trigger zmq_poll again immediately. Why does one need to "use it (zmq_poll) before output as well" or "process all events after each zmq_poll() call"?
I see your point; the documentation is confusing. Here's a simple test in Java using a client-side DEALER socket with a poller (adapted from asyncsrv). The server sends 3 messages to the client. The client polls and outputs each message it receives. I've added a send() in the client to test your theory. If send() really cleared the poller, we would expect the client to output receipt of only a single message:
Server
public static void main(String[] args) {
    Context context = ZMQ.context(1);
    ZMQ.Socket server = context.socket(ZMQ.ROUTER);
    server.bind("tcp://*:5555");
    server.sendMore("clientId");
    server.send("msg1");
    server.sendMore("clientId");
    server.send("msg2");
    server.sendMore("clientId");
    server.send("msg3");
}
Client
public void run() {
    socket = context.socket(ZMQ.DEALER);
    socket.setIdentity("clientId".getBytes());
    socket.connect("tcp://localhost:5555");
    ZMQ.Poller poller = new ZMQ.Poller(1);
    poller.register(socket, ZMQ.Poller.POLLIN);
    while (true) {
        poller.poll();
        if (poller.pollin(0)) {
            String msg = socket.recvStr(0);
            System.out.println("Client got msg: " + msg);
            socket.send("whatever", 0);
        }
    }
}
outputs...
Client got msg: msg1
Client got msg: msg2
Client got msg: msg3
Based on the results, doing send() does not clear the poller for socket, and it should be obvious why: we configured the poller with POLLIN, meaning it listens for inbound messages on socket, while socket.send() creates outbound messages, which the poller is not listening for.
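If you want to follow the documentation's advice literally and poll before output as well, you can additionally register for POLLOUT and only send when the socket is writable. A variation on the client's loop (note a DEALER socket is almost always writable, so a real program would also track whether it actually has something to send):

poller.register(socket, ZMQ.Poller.POLLIN | ZMQ.Poller.POLLOUT);
while (true) {
    poller.poll();
    if (poller.pollin(0)) {
        String msg = socket.recvStr(0);
        System.out.println("Client got msg: " + msg);
    }
    if (poller.pollout(0)) {
        socket.send("whatever", 0); // only send when the socket can accept output
    }
}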
Hope it helps...
