Netty application optimization - performance

I'm writing a local HTTP server based on Netty. When I run a stress test, I'm limited to 400 requests/second.
To optimize my server, I wrote a simple Netty-based server that just sends "Hello World" to the client, launched a stress test with Gatling 2 against it, and got the same result (limited to 400 req/s).
I use YourKit for profiling: there is no extra GC activity, and my open/closed sockets are limited to 480 sockets/s.
I work on a MacBook Pro with 4 cores and 16 GB of RAM, and I use Netty 4.1.
I'm surprised to be limited to 400 req/s, because the results of other benchmark tests show >20,000 req/s or more. I understand that there are hardware limits, but 400 req/s for sending "Hello World" on 4 cores with 16 GB of RAM is very low.
Thank you in advance for your help; I don't know where to begin optimizing my Netty code.
Are there any concrete guidelines for optimizing Netty?
Here is the source code of my "Hello World" server, followed by the handler for my connections:
public class TestServer {

    private static final Logger logger = LogManager.getLogger("TestServer");

    int nbSockets = 0;

    EventLoopGroup pool = new NioEventLoopGroup();

    private void init(int port) {
        EventLoopGroup bossGroup = new NioEventLoopGroup(100);
        try {
            long t1 = System.currentTimeMillis();
            ServerBootstrap b = new ServerBootstrap().group(bossGroup);
            b.channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast("decoder", new HttpRequestDecoder(8192, 8192 * 2, 8192 * 2));
                     ch.pipeline().addLast("encoder", new HttpResponseEncoder());
                     ch.pipeline().addLast(new TestServerHandler(TestServer.this));
                 }

                 @Override
                 public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
                     System.err.println("Error");
                     super.exceptionCaught(ctx, cause);
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 100000)
             .option(ChannelOption.SO_KEEPALIVE, false)
             .option(ChannelOption.TCP_NODELAY, false)
             .option(ChannelOption.SO_REUSEADDR, true)
             .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
             .childOption(ChannelOption.SO_KEEPALIVE, true);

            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
            scheduler.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    System.err.println(nbSockets);
                    nbSockets = 0;
                }
            }, 1, 1, TimeUnit.SECONDS);

            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync();
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            System.err.println("Coucou");
        }
    }

    public static void main(String[] args) {
        TestServer testServer = new TestServer();
        testServer.init(8888);
    }
}
and here is the source code of my handler:
public class TestServerHandler extends ChannelInboundHandlerAdapter {

    private final TestServer testServer;

    public TestServerHandler(TestServer testServer) {
        this.testServer = testServer;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        try {
            process(ctx, msg);
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }

    public void process(ChannelHandlerContext ctx, Object msg) throws Exception {
        ctx.channel().writeAndFlush(buildHttpResponse()).addListener(new GenericFutureListener<Future<? super Void>>() {
            @Override
            public void operationComplete(Future<? super Void> future) throws Exception {
                ctx.channel().close();
                testServer.nbSockets++;
            }
        });
    }

    public DefaultFullHttpResponse buildHttpResponse() {
        String body = "hello world";
        byte[] bytes = body.getBytes(Charset.forName("UTF-8"));
        ByteBuf byteContent = Unpooled.copiedBuffer(bytes);
        HttpResponseStatus httpResponseStatus = HttpResponseStatus.OK;
        DefaultFullHttpResponse httpResponse = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
                httpResponseStatus, byteContent);
        return httpResponse;
    }
}

You've disabled keep-alive and are closing the connection on every request, so I suspect you spend most of your time opening and closing HTTP connections.
"the results of other benchmark tests show >20,000 req/s or more"
Which other benchmarks are you referring to? There's a very good chance they were both pooling connections and using HTTP pipelining, hence a very different usage pattern from yours.
Back to your original question (how to optimize Netty), there are two kinds of things you could do:
Micro-optimize allocations: use pooled ByteBufs, or even better, compute the response only once
Switch to the native epoll transport (Linux only)
But all those improvements probably won't amount to much compared to connection handling.
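To illustrate the connection-handling point, here is a minimal sketch of a handler that honors HTTP keep-alive and reuses a precomputed response body. It assumes an HttpObjectAggregator has been added to the pipeline after the decoder; the class name and header values are placeholders of mine, not code from your server:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.*;
import io.netty.util.CharsetUtil;

public class KeepAliveHelloHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    // Body bytes are computed once and wrapped per response instead of being copied per request.
    private static final byte[] BODY = "hello world".getBytes(CharsetUtil.UTF_8);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK, Unpooled.wrappedBuffer(BODY));
        response.headers()
                .set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8")
                .setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());

        if (HttpUtil.isKeepAlive(request)) {
            // Keep the connection open so the client can reuse it for the next request.
            response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
            ctx.writeAndFlush(response);
        } else {
            ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

For this to matter, the load generator must also be configured to reuse connections; otherwise the per-connection setup cost still dominates.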

Related

Redis timeout: increasing min threads and connection timeout does not resolve it

Redis timeout errors occurring in my application when processing thousands of messages
I have an application which takes in contracts and creates a key and value in Redis for each contract before it is sent to Kafka. If I get the same contract, the application checks whether the key exists and the value is the same as the previous contract; if so, the message is not processed further.
I have tried increasing the connectTimeout and connectRetry in the connection string. I have also tried setting the min threads in the program's Main method.
The outer application which calls Redis is configured by setting min threads and a connection string with increased connectTimeout and connectRetry:
public static void Main(string[] args)
{
    System.Threading.ThreadPool.SetMinThreads(1000, 300);
    CreateWebHostBuilder(args).Build().Run();
}
"Redis": {
"ConnectionString": "localhost:6379",connectTimeout=10000,connectRetry=5
}
public class RedisConnectionFactory
{
    public static ConnectionMultiplexer GetConnection(string connectionString)
    {
        var options = ConfigurationOptions.Parse(connectionString);
        return ConnectionMultiplexer.Connect(options);
    }
}

public interface IRedisDataAgent
{
    string GetStringValue(string key);
    void SetStringValue(string key, string value);
    void DeleteStringValue(string key);
}

public class RedisDataAgent : IRedisDataAgent
{
    private static IDatabase _database;

    public RedisDataAgent(IConfiguration configuration)
    {
        var connection = RedisConnectionFactory.GetConnection(configuration.GetSection("Redis")["ConnectionString"]);
        _database = connection.GetDatabase();
    }

    public string GetStringValue(string key)
    {
        return _database.StringGet(key);
    }

    public void SetStringValue(string key, string value)
    {
        _database.StringSet(key, value);
    }

    public void DeleteStringValue(string key)
    {
        _database.KeyDelete(key);
    }
}
The application is expected not to throw any timeout errors, no matter how many thousands of messages are being processed.
I have increased the min threads to 1000, but it is still throwing errors. No matter how much I increase it, it still throws errors:
An unhandled exception was thrown by the application.
StackExchange.Redis.RedisTimeoutException: Timeout performing SET (5000ms),
IOCP: (Busy=0,Free=1000,Min=300,Max=1000), WORKER: (Busy=1018,Free=31749,Min=1000,Max=32767)

How to fix a weird "channelMax" error in RabbitMQ RPC?

I created a simple client and server. The client sends RPC requests:
RabbitTemplate template.convertSendAndReceive(...);
The server receives them and answers back:
@RabbitListener(queues = "#{queue.getName()}")
public Object handler(@Payload String key)...
Then I made the client send RPC requests asynchronously and simultaneously (which produces a lot of concurrent RPC requests).
And unexpectedly I receive an error:
org.springframework.amqp.AmqpResourceNotAvailableException: The channelMax limit is reached. Try later.
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:59)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:1208)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$200(CachingConnectionFactory.java:1196)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:599)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:582)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:552)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:534)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1400(CachingConnectionFactory.java:99)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel
The RabbitMQ client seems to create too many channels. How do I fix this?
And why does my client create so many?
Channels are cached, so there should only be as many channels as there are actual RPC calls in process.
You may need to increase the channel max setting on the broker.
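For reference, the broker-side limit is the channel_max setting in rabbitmq.conf; a minimal sketch (the value here is only an example, and the effective limit is also negotiated with the client):

# rabbitmq.conf
channel_max = 4095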
EDIT
If your RPC calls are long-lived, you can reduce the time the channel is used by using the AsyncRabbitTemplate with an explicit reply queue, and avoid using the direct reply-to feature.
See the documentation.
EDIT2
Here is an example using the AsyncRabbitTemplate; it sends 1000 messages on 100 threads (and the consumer has 100 threads).
The total number of channels used was 107: 100 for the consumers, and only 7 were used for sending.
@SpringBootApplication
public class So56126654Application {

    public static void main(String[] args) {
        SpringApplication.run(So56126654Application.class, args);
    }

    @RabbitListener(queues = "so56126654", concurrency = "100")
    public String slowService(String in) throws InterruptedException {
        Thread.sleep(5_000L);
        return in.toUpperCase();
    }

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        ExecutorService exec = Executors.newFixedThreadPool(100);
        return args -> {
            System.out.println(asyncTemplate.convertSendAndReceive("foo").get());
            for (int i = 0; i < 1000; i++) {
                int n = i;
                exec.execute(() -> {
                    RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo" + n);
                    try {
                        System.out.println(future.get(10, TimeUnit.SECONDS));
                    }
                    catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        e.printStackTrace();
                    }
                    catch (ExecutionException e) {
                        e.printStackTrace();
                    }
                    catch (TimeoutException e) {
                        e.printStackTrace();
                    }
                });
            }
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(ConnectionFactory connectionFactory) {
        return new AsyncRabbitTemplate(connectionFactory, "", "so56126654", "so56126654-replies");
    }

    @Bean
    public Queue queue() {
        return new Queue("so56126654");
    }

    @Bean
    public Queue reeplyQueue() {
        return new Queue("so56126654-replies");
    }
}

Netty Latency issue

I am a new Netty user, working with Netty 4.0.19.Final. I am load testing the EchoServer example with 50 clients; below is my configuration. I always get a latency of around 300 microseconds, and I am trying to reduce it to around 100 microseconds. Is there anything I can try to achieve the desired performance? All my clients are persistent and send messages with a delay of 10 milliseconds.
ServerBootstrap b = new ServerBootstrap();
b.group(workerGroup)
 .channel(NioServerSocketChannel.class)
 .localAddress(NetUtil.LOCALHOST, Integer.valueOf(8080))
 .option(ChannelOption.SO_BACKLOG, 128)
 .childOption(ChannelOption.SO_KEEPALIVE, true)
 .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
 .childOption(ChannelOption.SO_SNDBUF, 1045678)
 .childOption(ChannelOption.SO_RCVBUF, 1045678)
 .option(ChannelOption.TCP_NODELAY, true)
 .childOption(ChannelOption.TCP_NODELAY, true)
 // .handler(new LoggingHandler(LogLevel.INFO))
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) throws Exception {
         ch.pipeline()
           // .addLast(new LoggingHandler(LogLevel.INFO))
           .addLast(new EchoServerHandler());
     }
 });

// Start the server.
ChannelFuture f = b.bind(PORT).sync();

// Wait until the server socket is closed.
f.channel().closeFuture().sync();
EchoServerHandler:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.write(msg);
}

@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    ctx.flush();
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // Close the connection when an exception is raised.
    cause.printStackTrace();
    ctx.close();
}
Try calling ctx.writeAndFlush(...). You may also need to adjust buffer sizes, etc.
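For example, a minimal sketch of that suggestion (not a guaranteed latency fix) is an echo handler that writes and flushes in one call; the class name here is my own:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ImmediateEchoHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Write and flush in one call so the echo is not held back until channelReadComplete.
        ctx.writeAndFlush(msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}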

How can an ISO server support concurrent requests?

I have implemented an ISO server using an ASCII channel and ASCII packager, listening on a port and responding to ISO requests.
How can I make my server accept concurrent requests and send the responses, please?
If you are using Q2, just deploy a QServer and set minSessions and maxSessions, whose default values are 0 and 100.
Here is an example jPOS server that handles concurrent requests:
http://didikhari.web.id/java/jpos-client-receive-response-specific-port/
ISOServer works with a thread pool, so you can accept concurrent requests out of the box. Every socket connection is handled by its own thread. So, I think all you have to do is assign an ISORequestListener to your ISOServer to actually process your incoming messages.
Here's a test program taken from the jPOS guide:
public class Test implements ISORequestListener {

    public Test() {
        super();
    }

    public boolean process(ISOSource source, ISOMsg m) {
        try {
            m.setResponseMTI();
            m.set(39, "00");
            source.send(m);
        } catch (ISOException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        Logger logger = new Logger();
        logger.addListener(new SimpleLogListener(System.out));
        ServerChannel channel = new XMLChannel(new XMLPackager());
        ((LogSource) channel).setLogger(logger, "channel");
        ISOServer server = new ISOServer(8000, channel, null);
        server.setLogger(logger, "server");
        server.addISORequestListener(new Test());
        new Thread(server).start();
    }
}

Telnet client for cisco router

I am writing a telnet client for a Cisco router using apache.commons.net.telnet, but I have a problem. Here is a code sample:
static TelnetClient telnetClient = new TelnetClient();

public static void main(String[] args) throws IOException {
    setOptionHandlers();
    telnetClient.connect("192.168.127.100");
    read();
    telnetClient.disconnect();
}

private static void setOptionHandlers() throws IOException {
    ArrayList<TelnetOptionHandler> optionHandlers = new ArrayList<TelnetOptionHandler>();
    optionHandlers.add(new TerminalTypeOptionHandler("VT100", false, false, true, false));
    optionHandlers.add(new EchoOptionHandler(true, false, true, false));
    optionHandlers.add(new SuppressGAOptionHandler(true, true, true, true));
    for (TelnetOptionHandler handler : optionHandlers) {
        try {
            telnetClient.addOptionHandler(handler);
        } catch (InvalidTelnetOptionException e) {
            System.err.println("Error registering option handler "
                    + handler.getClass().getSimpleName());
        }
    }
}

public static void write(byte[] data) throws IOException {
    telnetClient.getOutputStream().write(data);
    telnetClient.getOutputStream().flush();
}

public static void read() throws IOException {
    System.out.println("Read");
    byte[] buff = new byte[1024];
    int read;
    if ((read = telnetClient.getInputStream().read(buff)) > 0) {
        System.out.println(new String(buff, 0, read));
    }
    System.out.println("read=" + read);
}
In some cases it works correctly and shows the prompt for entering the password. But in other cases it works incorrectly and hangs while reading from the telnet input stream. The run conditions are the same. Why do I get this behavior?
If anyone has tips for writing a Cisco telnet client, I'll be glad to hear them!
I can reproduce this problem every time.
The problem can be worked around by changing your read buffer size to 1 byte.
This accounts for why the readUntil() function from "Looking for Java Telnet emulator" works, as it simply calls read() for 1 byte.
That said, does this indicate a bug in org.apache.commons.net.telnet.TelnetClient?
Edit: Rolled back to an earlier version of Commons Net and the problem disappeared!
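For reference, here is a minimal sketch of the 1-byte-at-a-time workaround; the helper method and the prompt string are illustrative, not part of Commons Net:

// Drop-in helper for the client above: reads one byte at a time until the given
// pattern (for example "Password: ") appears, mirroring the readUntil() approach.
public static String readUntil(java.io.InputStream in, String pattern) throws IOException {
    StringBuilder sb = new StringBuilder();
    int b;
    while ((b = in.read()) != -1) {   // single-byte reads avoid the large-buffer hang
        sb.append((char) b);
        if (sb.length() >= pattern.length()
                && sb.substring(sb.length() - pattern.length()).equals(pattern)) {
            return sb.toString();     // pattern found
        }
    }
    return sb.toString();             // stream ended before the pattern appeared
}

Usage would be something like readUntil(telnetClient.getInputStream(), "Password: ").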
