Blocking TCP packet receiving in Netty 4.x - proxy

How can I prevent Netty from sending an ACK response to the client in Netty 4.x?
I'm trying to control the TCP packet receive speed in Netty so that I can forward the packets to another server. Netty receives all of the client's packets immediately, but it needs more time to send them out, so the client thinks the transfer is finished as soon as it has sent everything to Netty.
So I want to know how to hold back incoming packets while Netty is still forwarding the previously received packets to the other server.

I'm not sure I really understand your question, so let me try to reformulate:
I suppose that your Netty server is acting as a proxy between clients and another server.
I suppose that what you want to do is send the ack back to the client only once you have really sent the forwarded packet to the final server (not necessarily received by the final server, but at least sent by the Netty proxy).
If so, then you should use the future of the forwarded packet to respond back with the ack, such as (pseudo-code):
channelOrCtxToFinalServer.writeAndFlush(packetToForward).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        // Perform the ack write back to the client
        ctxOfClientChannel.writeAndFlush(AckPacket);
    }
});
where:
channelOrCtxToFinalServer is either the ChannelHandlerContext or the Channel connected from your Netty proxy to the remote final server,
and ctxOfClientChannel is the current ChannelHandlerContext of the Netty handler that receives the packet from the client in its public void channelRead(ChannelHandlerContext ctxOfClientChannel, Object packetToForward) method.
EDIT:
For the big file transfer issue, you can have a look at the Proxy example here.
In particular, pay attention to the following:
Using the same logic, receive the client's data one chunk at a time:
yourServerBootstrap.childOption(ChannelOption.AUTO_READ, false);
// Allows you to control, one read at a time, the speed at which the client's packets are received
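For context, here is a minimal sketch of where that option fits into the server bootstrap (bossGroup, workerGroup and FrontendHandler are placeholders for your own setup, not names taken from the proxy example):
ServerBootstrap yourServerBootstrap = new ServerBootstrap();
yourServerBootstrap.group(bossGroup, workerGroup) // placeholder event loop groups
        .channel(NioServerSocketChannel.class)
        // disable automatic reads: the handler decides when to read the next chunk
        .childOption(ChannelOption.AUTO_READ, false)
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(new FrontendHandler()); // placeholder handler class
            }
        });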
In your frontend handler:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    // was able to flush out data, start to read the next chunk
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    }
}
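Note that with AUTO_READ disabled you have to trigger the first read yourself. In the proxy example this happens once the outbound connection to the final server is established, roughly as below (BackendHandler, remoteHost and remotePort are placeholders for your own backend handler and the final server's address):
@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();
    // Open the connection to the final server; outboundChannel is the field used in channelRead above
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
     .channel(ctx.channel().getClass())
     .handler(new BackendHandler(inboundChannel)) // placeholder backend handler
     .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort); // placeholder final server address
    outboundChannel = f.channel();
    f.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (future.isSuccess()) {
                // connection complete, start to read the first chunk from the client
                inboundChannel.read();
            } else {
                inboundChannel.close();
            }
        }
    });
}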
And finally, using the very same logic, send the final ack to your client (the ack depends, of course, on your protocol): (see here and here)
/**
 * Closes the specified channel after all queued write requests are flushed.
 */
static void closeOnFlush(Channel ch) {
    if (ch.isActive()) {
        ch.writeAndFlush(AckPacket).addListener(ChannelFutureListener.CLOSE);
    }
}
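For example, once the last forwarded write completes (how you detect the last packet depends on your protocol, so lastPacketToForward below is a placeholder), you could send the ack and close the client connection like this:
outboundChannel.writeAndFlush(lastPacketToForward).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        // the last packet has been written out to the final server,
        // so send the final ack back to the client and close its channel
        closeOnFlush(ctxOfClientChannel.channel());
    }
});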

Related

Detect server disconnect in gRPC Go client

I have a gRPC service similar to the one below:
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}
I need the client to maintain a long living gRPC connection to the server so that if the server goes down, the client can reconnect and issue SayHello() again.
Based on my understanding there are a few options:
Pass in a statsHandler to grpc.Dial and add retry logic in HandleConn()
Add a new ClientStreaming API that maybe sends a message every few seconds. Check for server side stream close errors and implement retry logic.
Not sure if there is a recommended way for my use case and would appreciate any help.

How Do I Connect a STOMP Client to an ActiveMQ Artemis Destination Created Using JMS (Spring Boot)?

CONTEXT
I am trying to learn about Spring JMS and MOMs, and I am using ActiveMQ Artemis for this. I created a queue destination address using the jakarta.jms.* API and managed to send some messages to the queue like this:
public void createUserDestination(String userId) throws JMSException {
    queueDestination = setupConnection().createQueue("user" + userId);
    producer = session.createProducer(queueDestination);
    producer.setDeliveryMode(DeliveryMode.PERSISTENT);
    producer.send(session.createTextMessage("Testing queue availability"));
    connection.close();
    log.info("successfully created group... going back to controller");
}
So for example, if I pass an ID of user12345abc, I get a queue address user12345abc of the routing type ANYCAST, with one queue underneath (with that same address) where my message is placed.
PROBLEM
Now, I wanted to write a simple web front-end with STOMP that can connect to this queue. But I have been having a ton of problems connecting to that queue address because each time I try to connect by providing the destination address, it creates a new address in the MOM and connects to that instead.
My STOMP code looks like this (the first argument is the destination address, you can ignore the rest of the code):
stompClient.subscribe("jms.queue.user12345abc", (message) => {
    receivedMessages.value.push(message.body);
});
In this case, a completely brand new queue is created with the address jms.queue.user12345abc, which is not what I want at all.
I configured my Spring backend to use an external MOM broker like this (I know this is important):
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two endpoints are prefixes for where the messages are pushed to
    registry.enableStompBrokerRelay("jms.topic", "jms.queue")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "jms.topic", "jms.queue");
    // this prefixes the endpoints that users subscribe to
    registry.setUserDestinationPrefix("/user");
}
But it's still not working as I expect it to. Am I getting some concept wrong here? How do I use STOMP to connect to that queue I created earlier with JMS?
It's not clear why you are using the jms.queue and jms.topic prefixes. Those are similar, but not quite the same as the jms.queue. and jms.topic. prefixes which were used way back in ActiveMQ Artemis 1.x (whose last release was in early 2018, almost 5 years ago now).
In any case, I recommend you use the more widely adopted /queue/ and /topic/, e.g.:
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two endpoints are prefixes for where the messages are pushed to
    registry.enableStompBrokerRelay("/topic/", "/queue/")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "/topic/", "/queue/");
    // this prefixes the endpoints that users subscribe to
    registry.setUserDestinationPrefix("/user");
}
Then in broker.xml you'd need to add the corresponding anycastPrefix and multicastPrefix values on the STOMP acceptor, e.g.:
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
To be clear, your JMS code will stay the same, but your STOMP consumer would be something like:
stompClient.subscribe("/queue/user12345abc", (message) => {
    receivedMessages.value.push(message.body);
});

I want to send numerous UDP messages from a UDP client, how could I achieve it?

I want to send numerous UDP messages from a UDP client at once, but the demo only sends one message. How could I achieve this?
With the demo code, I can only send a finite number of messages. I want to use a while(true) loop to keep sending messages; how could I achieve this?
public static void main(String[] args) {
    Connection connection =
            UdpClient.create()
                     .host("localhost")
                     .port(8080)
                     .handle((udpInbound, udpOutbound) -> {
                         return udpOutbound.sendString(Mono.just("end"))
                                           .sendString(Mono.just("end1"))
                                           .sendString(Mono.just("end2"));
                     })
                     .connectNow(Duration.ofSeconds(30));
    connection.onDispose()
              .block();
}
You can use Flux instead of Mono when you want to send more than one message.
A single sendString(Flux) invocation is better than the approach with many sendString(Mono) invocations.
The example below uses Flux.interval so that you have an infinite stream that emits a message every 100 ms. Also, when you have an infinite stream, you have to switch to the flush-on-each strategy.
Connection connection =
        UdpClient.create()
                 .host("localhost")
                 .port(8080)
                 .handle((udpInbound, udpOutbound) ->
                         udpOutbound.options(NettyPipeline.SendOptions::flushOnEach)
                                    .sendString(Flux.interval(Duration.ofMillis(100))
                                                    .map(l -> l + "")))
                 .connectNow(Duration.ofSeconds(30));
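If you only need a bounded batch rather than an infinite stream, the same sendString(Flux) pattern works with a finite publisher; a minimal sketch (the batch size of 100 is arbitrary):
Connection connection =
        UdpClient.create()
                 .host("localhost")
                 .port(8080)
                 .handle((udpInbound, udpOutbound) ->
                         udpOutbound.options(NettyPipeline.SendOptions::flushOnEach)
                                    .sendString(Flux.range(1, 100) // arbitrary number of messages
                                                    .map(i -> "message-" + i)))
                 .connectNow(Duration.ofSeconds(30));
connection.onDispose()
          .block();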

Using zmq_poll and zmq_send() on a same socket

I'm confused by a warning in the api of zmq_poll: "The zmq_send() function will clear all pending events on a socket. Thus, if you use zmq_poll() to monitor input on a socket, use it before output as well, and process all events after each zmq_poll() call."
I don't understand what that means. Since events are level-triggered, if I call zmq_send() and then zmq_poll(), any pending messages in the socket's buffer should trigger zmq_poll again immediately. Why does one need to "use it (zmq_poll) before output as well" or "process all events after each zmq_poll() call"?
I see your point, the documentation is confusing. Here's a simple test in Java using a client-side DEALER socket with a poller (from asyncsrv). The server sends 3 messages to the client. The client polls and outputs each message it receives. I've added send() in the client to test your theory. Assuming send() clears the poller, we would expect the client to output receipt of only a single message:
Server
public static void main(String[] args) {
    Context context = ZMQ.context(1);
    ZMQ.Socket server = context.socket(ZMQ.ROUTER);
    server.bind("tcp://*:5555");
    server.sendMore("clientId");
    server.send("msg1");
    server.sendMore("clientId");
    server.send("msg2");
    server.sendMore("clientId");
    server.send("msg3");
}
Client
public void run() {
    socket = context.socket(ZMQ.DEALER);
    socket.setIdentity("clientId".getBytes());
    socket.connect("tcp://localhost:5555");
    ZMQ.Poller poller = new ZMQ.Poller(1);
    poller.register(socket, ZMQ.Poller.POLLIN);
    while (true) {
        poller.poll();
        if (poller.pollin(0)) {
            String msg = socket.recvStr(0);
            System.out.println("Client got msg: " + msg);
            socket.send("whatever", 0);
        }
    }
}
outputs...
Client got msg: msg1
Client got msg: msg2
Client got msg: msg3
Based on the results, doing send() does not clear the poller for socket, and it should be obvious why. We configured the poller with POLLIN, meaning the poller listens for inbound messages to socket. When doing socket.send(), it creates outbound messages, on which the poller is not listening.
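If you also want the poller to track writability, you can register the socket for both event types and then process every ready event after each poll, which is what the documentation is really asking for. A rough sketch reusing the client above:
ZMQ.Poller poller = new ZMQ.Poller(1);
poller.register(socket, ZMQ.Poller.POLLIN | ZMQ.Poller.POLLOUT);
while (true) {
    poller.poll();
    if (poller.pollin(0)) {
        // the socket has an inbound message ready to receive
        String msg = socket.recvStr(0);
        System.out.println("Client got msg: " + msg);
    }
    if (poller.pollout(0)) {
        // the socket can accept an outbound message without blocking;
        // in practice, only register POLLOUT while you actually have data queued to send
        socket.send("whatever", 0);
    }
}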
Hope it helps...

Server in Apache Mina

I found some code at this link http://www.techbrainwave.com/?p=912 which describes how to set up a client-server architecture using Apache MINA. However, the example provided only demonstrates one-way communication (from client to server). Does anyone know how to modify it in order to obtain two-way communication?
If you want the server to reply to the client's message, you need to do it in the IoHandler of the server:
@Override
public void messageReceived(IoSession session, Object message)
{
    logger.info("Message received in the server..");
    logger.info("Message is: " + message.toString());
    // reply to the client
    session.write( /*the reply message here */ );
}
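On the client side you also need an IoHandler that processes the server's reply; a minimal sketch (the ClientHandler class name is hypothetical):
public class ClientHandler extends IoHandlerAdapter {
    @Override
    public void messageReceived(IoSession session, Object message) throws Exception {
        // the server's reply arrives here
        System.out.println("Reply from server: " + message.toString());
    }
}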
