How can I detect a new incoming connection with Netty 4.0.x?
Here is the code for a simple server:
public static void main(String[] args) {
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) throws Exception {
                 System.out.println("Init channel");
                 ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                     @Override
                     public void channelRegistered(ChannelHandlerContext ctx) {
                         System.out.println("Channel registered");
                     }

                     @Override
                     public void channelActive(final ChannelHandlerContext ctx) {
                         System.out.println("Channel active");
                     }

                     @Override
                     public void channelRead(ChannelHandlerContext ctx, Object msg) {
                         System.out.println("Channel read");
                         ctx.writeAndFlush((ByteBuf) msg);
                     }

                     @Override
                     public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                         System.out.println("Channel exception caught");
                         cause.printStackTrace();
                         ctx.close();
                     }
                 });
             }
         })
         .option(ChannelOption.SO_BACKLOG, 128)
         .childOption(ChannelOption.SO_KEEPALIVE, true);

        ChannelFuture f;
        try {
            f = b.bind("127.0.0.1", 8080).sync();
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    } finally {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
The problem is that I cannot detect the moment when a connection is established. All of the System.out.println calls fire only after the client has sent some data to the server, which means the server cannot send anything to the client until the client sends data first.
What I expect:
server started
client started
client connected
{{ Init channel }}
{{ Channel registered }}
{{ Channel active }}
// Possibly send some data from server to client
Client send some data
{{ Channel read }}
What I get:
server started
client started
client connected
// Unable to send any data to client
Client send some data
{{ Init channel }}
{{ Channel registered }}
{{ Channel active }}
{{ Channel read }}
Are there any ways to achieve such behavior?
You can send data as soon as the client connects to the server, for example by calling ctx.writeAndFlush(msg) in the channelRegistered or channelActive method.
If your client does not receive any data, make sure the encoder in your pipeline matches the type of message you are writing.
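For illustration, a minimal sketch (reusing the question's pipeline setup) of a handler that greets the client in channelActive, before the client has sent anything:

class GreetingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Writing a ByteBuf directly needs no extra encoder in the pipeline.
        ByteBuf greeting = Unpooled.copiedBuffer("hello from server\n", CharsetUtil.UTF_8);
        ctx.writeAndFlush(greeting);
    }
}

Adding this handler in initChannel() (e.g. ch.pipeline().addLast(new GreetingHandler())) lets the server push data as soon as the channel becomes active.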
It looks like port 8080 somehow caused this problem. When I changed the port number to 12345, the program began to work as expected.
I have this code in RpiAlertResource.java
import io.socket.client.IO;
import io.socket.client.Socket;
import io.socket.emitter.Emitter;
...
#PostMapping("/nms-rpi-alertsMany")
public String createNmsRpiAlertMany(#RequestBody NmsRpiAlert rpiAlert) throws URISyntaxException {
....
Socket socket;
try {
socket = IO.socket("https://xxx.xxx.xx");
socket.on(Socket.EVENT_CONNECT, new Emitter.Listener() {
#Override
public void call(Object... args) {
socket.emit("InsertAlert", alert.getId(), alert.getSentToZones() );
socket.disconnect();
}
}).on(Socket.EVENT_DISCONNECT, new Emitter.Listener() {
#Override
public void call(Object... args) {
}
});
socket.connect();
} catch (URISyntaxException e) {
e.printStackTrace();
}
So it opens and closes a socket connection every time the API is called.
It works, but this API is called a lot (roughly twice a minute), so I was wondering if there is a better way to implement this connection.
For example, a 'global' socket connection that reconnects whenever it loses the connection?
Where in a Spring Boot application should I set up such a connection?
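One possible approach (a sketch, not from the original post; the class and bean names are made up) is to expose a single long-lived Socket as a Spring bean so every request reuses the same connection; the socket.io Java client attempts to reconnect automatically by default:

import java.net.URISyntaxException;

import io.socket.client.IO;
import io.socket.client.Socket;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SocketIoConfig {

    // One shared connection for the whole application; Spring calls
    // disconnect() when the context shuts down.
    @Bean(destroyMethod = "disconnect")
    public Socket alertSocket() throws URISyntaxException {
        Socket socket = IO.socket("https://xxx.xxx.xx");
        socket.connect();
        return socket;
    }
}

The controller could then inject the Socket and call socket.emit("InsertAlert", ...) directly instead of opening and closing a connection per request.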
I created a simple client and server. The client sends RPC requests:
RabbitTemplate template.convertSendAndReceive(...);
The server receives them and answers back:
@RabbitListener(queues = "#{queue.getName()}")
public Object handler(@Payload String key)...
Then I make the client send RPC requests asynchronously and simultaneously (which produces a lot of concurrent RPC requests),
and unexpectedly get this error:
org.springframework.amqp.AmqpResourceNotAvailableException: The channelMax limit is reached. Try later.
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:59)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:1208)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$200(CachingConnectionFactory.java:1196)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:599)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:582)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:552)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:534)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1400(CachingConnectionFactory.java:99)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel
The RabbitMQ client seems to create too many channels. How can I fix this?
And why does my client create so many of them?
Channels are cached, so there should only be as many channels as there are actual RPC calls in progress.
You may need to increase the channel max setting on the broker.
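As a sketch (an assumption, not part of the original answer): the negotiated per-connection limit is the lower of the client's requested value and the broker's channel_max (rabbitmq.conf), so the broker setting still applies, but the client-side request can be raised like this:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitChannelMaxConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        com.rabbitmq.client.ConnectionFactory rabbit = new com.rabbitmq.client.ConnectionFactory();
        rabbit.setHost("localhost");
        // Ask for a higher per-connection channel limit; the broker's own
        // channel_max setting still caps the negotiated value.
        rabbit.setRequestedChannelMax(4095);
        return new CachingConnectionFactory(rabbit);
    }
}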
EDIT
If your RPC calls are long-lived, you can reduce the time the channel is used by using the AsyncRabbitTemplate with an explicit reply queue, and avoid using the direct reply-to feature.
See the documentation.
EDIT2
Here is an example using the AsyncRabbitTemplate; it sends 1000 messages on 100 threads (and the consumer has 100 threads).
The total number of channels used was 107: 100 for the consumers and only 7 for sending.
@SpringBootApplication
public class So56126654Application {

    public static void main(String[] args) {
        SpringApplication.run(So56126654Application.class, args);
    }

    @RabbitListener(queues = "so56126654", concurrency = "100")
    public String slowService(String in) throws InterruptedException {
        Thread.sleep(5_000L);
        return in.toUpperCase();
    }

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        ExecutorService exec = Executors.newFixedThreadPool(100);
        return args -> {
            System.out.println(asyncTemplate.convertSendAndReceive("foo").get());
            for (int i = 0; i < 1000; i++) {
                int n = i;
                exec.execute(() -> {
                    RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo" + n);
                    try {
                        System.out.println(future.get(10, TimeUnit.SECONDS));
                    }
                    catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        e.printStackTrace();
                    }
                    catch (ExecutionException e) {
                        e.printStackTrace();
                    }
                    catch (TimeoutException e) {
                        e.printStackTrace();
                    }
                });
            }
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(ConnectionFactory connectionFactory) {
        return new AsyncRabbitTemplate(connectionFactory, "", "so56126654", "so56126654-replies");
    }

    @Bean
    public Queue queue() {
        return new Queue("so56126654");
    }

    @Bean
    public Queue replyQueue() {
        return new Queue("so56126654-replies");
    }

}
I am successfully connecting to a local websocket server with Tyrus, but the onMessage method does not get called. I set up Fiddler as a proxy in between and can see that the server responds with two messages; however, they are not printed out in my code. I more or less adapted the sample code:
The onOpen message is printed out.
public static void createAndConnect(String channel) {
    CountDownLatch messageLatch;
    try {
        messageLatch = new CountDownLatch(1);
        final ClientEndpointConfig cec = ClientEndpointConfig.Builder.create().build();
        ClientManager client = ClientManager.createClient();
        client.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig config) {
                System.out.println("On Open and is Open " + session.isOpen());
                session.addMessageHandler((Whole<String>) message -> {
                    System.out.println("Received message: " + message);
                    messageLatch.countDown();
                });
            }
        }, cec, new URI("ws://192.168.1.248/socket.io/1/websocket/" + channel));
        messageLatch.await(5, TimeUnit.SECONDS); // I also tried increasing the timeout to 30 seconds; it doesn't help
    } catch (Exception e) {
        e.printStackTrace();
    }
}
That's a known issue: it will work if you rewrite the lambda as an anonymous class, or if you use Session#addMessageHandler(Class, MessageHandler) (you can use lambdas there).
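For illustration, the two workarounds would look roughly like this inside onOpen (using the javax.websocket 1.1 Session API):

// (1) Anonymous class, so the generic type is visible to Tyrus at runtime:
session.addMessageHandler(new MessageHandler.Whole<String>() {
    @Override
    public void onMessage(String message) {
        System.out.println("Received message: " + message);
        messageLatch.countDown();
    }
});

// (2) Pass the message type explicitly; then a lambda works:
session.addMessageHandler(String.class, (MessageHandler.Whole<String>) message -> {
    System.out.println("Received message: " + message);
    messageLatch.countDown();
});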
I am using the HttpComponents NIO server to handle POST requests.
Below is the sample code. It reads the complete body into a byte array using EntityUtils.toByteArray(), which fails if the requester sends a large file.
I couldn't figure out how to read the data in the request in chunks.
HttpEntity.getContent().read() always returns null.
public static void main(String[] args) throws Exception {
    int port = 8280;
    // Create HTTP protocol processing chain
    HttpProcessor httpproc = HttpProcessorBuilder.create()
            .add(new ResponseDate())
            .add(new ResponseServer("Test/1.1"))
            .add(new ResponseContent())
            .add(new ResponseConnControl()).build();
    // Create request handler registry
    UriHttpAsyncRequestHandlerMapper registry = new UriHttpAsyncRequestHandlerMapper();
    // Register the default handler for all URIs
    registry.register("/test*", new RequestHandler());
    // Create server-side HTTP protocol handler
    HttpAsyncService protocolHandler = new HttpAsyncService(httpproc, registry) {
        @Override
        public void connected(final NHttpServerConnection conn) {
            System.out.println(conn + ": connection open");
            super.connected(conn);
        }

        @Override
        public void closed(final NHttpServerConnection conn) {
            System.out.println(conn + ": connection closed");
            super.closed(conn);
        }
    };
    // Create HTTP connection factory
    NHttpConnectionFactory<DefaultNHttpServerConnection> connFactory;
    connFactory = new DefaultNHttpServerConnectionFactory(ConnectionConfig.DEFAULT);
    // Create server-side I/O event dispatch
    IOEventDispatch ioEventDispatch = new DefaultHttpServerIODispatch(protocolHandler, connFactory);
    // Set I/O reactor defaults
    IOReactorConfig config = IOReactorConfig.custom()
            .setIoThreadCount(1)
            .setSoTimeout(3000)
            .setConnectTimeout(3000)
            .build();
    // Create server-side I/O reactor
    ListeningIOReactor ioReactor = new DefaultListeningIOReactor(config);
    try {
        // Listen on the given port
        ioReactor.listen(new InetSocketAddress(port));
        // Ready to go!
        ioReactor.execute(ioEventDispatch);
    } catch (InterruptedIOException ex) {
        System.err.println("Interrupted");
    } catch (IOException e) {
        System.err.println("I/O error: " + e.getMessage());
    }
    System.out.println("Shutdown");
}
public static class RequestHandler implements HttpAsyncRequestHandler<HttpRequest> {

    public void handleInternal(HttpRequest httpRequest, HttpResponse httpResponse, HttpContext httpContext)
            throws HttpException, IOException {
        HttpEntity entity = null;
        if (httpRequest instanceof HttpEntityEnclosingRequest) {
            entity = ((HttpEntityEnclosingRequest) httpRequest).getEntity();
        }
        byte[] data;
        if (entity == null) {
            data = new byte[0];
        } else {
            data = EntityUtils.toByteArray(entity);
        }
        System.out.println(new String(data));
        httpResponse.setEntity(new StringEntity("success response"));
    }

    @Override
    public HttpAsyncRequestConsumer<HttpRequest> processRequest(HttpRequest request, HttpContext context)
            throws HttpException, IOException {
        return new BasicAsyncRequestConsumer();
    }

    @Override
    public void handle(HttpRequest request, HttpAsyncExchange httpExchange, HttpContext context)
            throws HttpException, IOException {
        HttpResponse response = httpExchange.getResponse();
        handleInternal(request, response, context);
        httpExchange.submitResponse(new BasicAsyncResponseProducer(response));
    }
}
Please consider implementing a custom AbstractAsyncRequestConsumer instead of BasicAsyncRequestConsumer if you want full control over request processing.
You might use these classes as a starting point [1][2]. Please note these are response consumers, though the same approach can be used to create custom request consumers:
[1] http://hc.apache.org/httpcomponents-asyncclient-4.1.x/httpasyncclient/xref/org/apache/http/nio/client/methods/AsyncCharConsumer.html
[2] http://hc.apache.org/httpcomponents-asyncclient-4.1.x/httpasyncclient/xref/org/apache/http/nio/client/methods/AsyncByteConsumer.html
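A rough sketch of such a consumer (assuming AbstractAsyncRequestConsumer exposes the same callbacks as its response-side counterpart: onRequestReceived, onEntityEnclosed, onContentReceived, buildResult, releaseResources; verify the exact signatures against your httpcore-nio version):

static class StreamingRequestConsumer extends AbstractAsyncRequestConsumer<HttpRequest> {

    private final ByteBuffer buf = ByteBuffer.allocate(8 * 1024);
    private HttpRequest request;

    @Override
    protected void onRequestReceived(HttpRequest request) {
        this.request = request;
    }

    @Override
    protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) {
        // Called once when an enclosed entity is detected.
    }

    @Override
    protected void onContentReceived(ContentDecoder decoder, IOControl ioctrl) throws IOException {
        // Called repeatedly as body chunks arrive; process each chunk here
        // instead of buffering the whole entity like EntityUtils.toByteArray().
        buf.clear();
        int read = decoder.read(buf);
        if (read > 0) {
            buf.flip();
            // e.g. append buf to a file channel
        }
    }

    @Override
    protected HttpRequest buildResult(HttpContext context) {
        return request;
    }

    @Override
    protected void releaseResources() {
        // close any file handles opened above
    }
}

Returning an instance of this consumer from processRequest() instead of BasicAsyncRequestConsumer would let the handler see the body incrementally rather than as one large byte array.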
I have implemented an ISO server using an ASCII channel and ASCII packager, listening on a port and responding to ISO requests.
How can I make my server accept concurrent requests and send the responses?
If you are using Q2, just deploy a QServer and set minSessions and maxSessions (their default values are 0 and 100).
Here is an example jPOS server that handles concurrent requests:
http://didikhari.web.id/java/jpos-client-receive-response-specific-port/
ISOServer works with a thread pool, so you can accept concurrent requests out of the box. Every socket connection is handled by its own thread. So, I think all you have to do is assign an ISORequestListener to your ISOServer to actually process your incoming messages.
Here's a test program taken from the jPOS guide:
public class Test implements ISORequestListener {

    public Test () {
        super();
    }

    public boolean process (ISOSource source, ISOMsg m) {
        try {
            m.setResponseMTI ();
            m.set (39, "00");
            source.send (m);
        } catch (ISOException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return true;
    }

    public static void main (String[] args) throws Exception {
        Logger logger = new Logger ();
        logger.addListener (new SimpleLogListener (System.out));
        ServerChannel channel = new XMLChannel (new XMLPackager());
        ((LogSource) channel).setLogger (logger, "channel");
        ISOServer server = new ISOServer (8000, channel, null);
        server.setLogger (logger, "server");
        server.addISORequestListener (new Test ());
        new Thread (server).start ();
    }
}