I am a new user of Netty, on version 4.0.19-Final. I am load testing the EchoServer example with 50 clients; my configuration is below. I always see a latency of around 300 microseconds and am trying to reduce it to around 100 microseconds. Is there anything I can try to achieve the desired performance? All my clients are persistent and send messages with a delay of 10 milliseconds.
ServerBootstrap b = new ServerBootstrap();
b.group(workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(NetUtil.LOCALHOST, Integer.valueOf(8080))
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true)
.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
.childOption(ChannelOption.SO_SNDBUF, 1045678)
.childOption(ChannelOption.SO_RCVBUF, 1045678)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.TCP_NODELAY, true)
// .handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline()
//.addLast(new LoggingHandler(LogLevel.INFO))
.addLast(new EchoServerHandler());
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
EchoServerHandler:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ctx.write(msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
Try calling ctx.writeAndFlush(...) instead. You may also need to adjust buffer sizes, etc.
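A minimal sketch of that change against the EchoServerHandler above (trading some write batching for lower per-message latency):
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // Flush each message as soon as it is written instead of waiting
    // for channelReadComplete to batch the writes.
    ctx.writeAndFlush(msg);
}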
javax.jms version 2.0.1
Provider: IBM MQ v9.0
Framework: Spring Boot
From what I know, onMessage() is asynchronous. I am successfully retrying the message send; however, the re-sending happens instantaneously after a message failure. Ideally I want the retries to back off in a sliding-window style, e.g. first retry after 20 seconds, second after 40, etc.
How can I achieve this without Thread.sleep(), which, I presume, would block the entire Java thread and is not something I want at all?
Code is something like this
final int TIME_TO_WAIT = 20;
public void onMessage(Message message)
{
    try
    {
        // ...
        int t = message.getIntProperty("JMSXDeliveryCount");
        if (t > 1)
        {
            // Figure out a way to wait for (TIME_TO_WAIT * t)
        }
    }
    catch (Exception e)
    {
        // Do some logging/cleanup etc.
        throw new RuntimeException(e); // this causes a message retry
    }
}
I would suggest using exponential backoff in the retry logic; for that you would need the Delivery Delay feature.
Define a custom JmsTemplate that uses a delay property from the message. You should add a retry count to the message properties as well, so that you can grow the delay as needed: 20, 40, 80, 160, etc.
public class DelayedJmsTemplate extends JmsTemplate {
public static final String DELAY_PROPERTY_NAME = "deliveryDelay";
@Override
protected void doSend(MessageProducer producer, Message message) throws JMSException {
long delay = -1;
if (message.propertyExists(DELAY_PROPERTY_NAME)) {
delay = message.getLongProperty(DELAY_PROPERTY_NAME);
}
if (delay >= 0) {
producer.setDeliveryDelay(delay);
}
if (isExplicitQosEnabled()) {
producer.send(message, getDeliveryMode(), getPriority(), getTimeToLive());
} else {
producer.send(message);
}
}
}
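A possible wiring for this template (a sketch; the bean shape and the injected ConnectionFactory are assumptions based on a typical Spring Boot setup):
@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    // Expose the delay-aware template as the application's JmsTemplate.
    JmsTemplate template = new DelayedJmsTemplate();
    template.setConnectionFactory(connectionFactory);
    return template;
}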
Define a component that has the capability of re-enqueueing the message; you can define this interface in the base message listener. The handleException method should do all the work of re-enqueueing, computing the delay, etc. You will not always be interested in re-enqueueing; in some cases you would discard messages as well.
You can see similar post-processing logic here:
https://github.com/sonus21/rqueue/blob/4c9c5c88f02e5cf0ac4b16129fe5b880411d7afc/rqueue-core/src/main/java/com/github/sonus21/rqueue/listener/PostProcessingHandler.java
@Component
@Slf4j
public class MessageListener {
private final JmsTemplate jmsTemplate;
@Autowired
public MessageListener(JmsTemplate jmsTemplate) {
this.jmsTemplate = jmsTemplate;
}
@JmsListener(destination = "myDestination")
public void onMessage(Message message) throws JMSException {
try {
// do something
} catch (Exception e) {
handleException("myDestination", message, e);
}
}
// Decide whether the message should be ignored due to many retries etc
private boolean shouldBeIgnored(String destination, Message message) {
return false;
}
// add logic to compute delay
private long getDelay(String destination, Message message, int deliveryCount) {
return 100L;
}
private void handleException(String destination, Message message, Exception e) throws JMSException {
if (shouldBeIgnored(destination, message)) {
log.info("destination: {}, message: {} is ignored ", destination, message, e);
return;
}
if (message.propertyExists("JMSXDeliveryCount")) {
int t = message.getIntProperty("JMSXDeliveryCount");
long delay = getDelay(destination, message, t + 1);
message.setLongProperty(DelayedJmsTemplate.DELAY_PROPERTY_NAME, delay);
message.setIntProperty("JMSXDeliveryCount", t + 1);
jmsTemplate.send(destination, session -> message);
} else {
// no delivery count, is this the first message or should be ignored?
}
}
}
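To get the 20/40/80-second progression from the question, the getDelay stub above could be filled in with exponential backoff; a sketch with illustrative values (delay in milliseconds, cap chosen arbitrarily):
private long getDelay(String destination, Message message, int deliveryCount) {
    // 20s, 40s, 80s, ... doubling per delivery, capped at 10 minutes.
    long base = 20_000L;
    long delay = base * (1L << Math.max(0, deliveryCount - 1));
    return Math.min(delay, 600_000L);
}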
Redis timeout errors occurring in my application when processing thousands of messages
I have an application which takes in contracts and creates a key and value in Redis for each contract before it is sent to Kafka. If I get the same contract, the application checks whether the key exists and whether the value matches the previous contract; if so, the message is not processed further.
I have tried increasing connectTimeout and connectRetry in the connection string, and I have also tried setting the min threads in the program's Main method.
The outer application which calls Redis is configured by setting min threads and by using the connection string with increased connectTimeout and connectRetry:
public static void Main(string[] args)
{
System.Threading.ThreadPool.SetMinThreads(1000, 300);
CreateWebHostBuilder(args).Build().Run();
}
"Redis": {
"ConnectionString": "localhost:6379",connectTimeout=10000,connectRetry=5
}
public class RedisConnectionFactory
{
public static ConnectionMultiplexer GetConnection(string connectionString)
{
var options = ConfigurationOptions.Parse(connectionString);
return ConnectionMultiplexer.Connect(options);
}
}
public interface IRedisDataAgent
{
string GetStringValue(string key);
void SetStringValue(string key, string value);
void DeleteStringValue(string key);
}
public class RedisDataAgent : IRedisDataAgent
{
private static IDatabase _database;
public RedisDataAgent(IConfiguration configuration)
{
var connection = RedisConnectionFactory.GetConnection(configuration.GetSection("Redis")["ConnectionString"]);
_database = connection.GetDatabase();
}
public string GetStringValue(string key)
{
return _database.StringGet(key);
}
public void SetStringValue(string key, string value)
{
_database.StringSet(key, value);
}
public void DeleteStringValue(string key)
{
_database.KeyDelete(key);
}
}
The application is expected not to throw any timeout errors no matter how many thousands of messages are being processed.
I have increased the min threads to 1000, but it is still throwing errors; no matter how much I increase them, the errors persist:
An unhandled exception was thrown by the application.
StackExchange.Redis.RedisTimeoutException: Timeout performing SET (5000ms),
IOCP: (Busy=0,Free=1000,Min=300,Max=1000), WORKER: (Busy=1018,Free=31749,Min=1000,Max=32767)
I created a simple client and server. The client sends RPC requests via RabbitTemplate:
template.convertSendAndReceive(...);
The server receives them and answers back:
@RabbitListener(queues = "#{queue.getName()}")
public Object handler(@Payload String key) ...
Then I made the client send RPC requests asynchronously and simultaneously, which produces a lot of concurrent RPC requests, and unexpectedly received an error:
org.springframework.amqp.AmqpResourceNotAvailableException: The channelMax limit is reached. Try later.
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:59)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:1208)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$200(CachingConnectionFactory.java:1196)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:599)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:582)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:552)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:534)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1400(CachingConnectionFactory.java:99)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel
The RabbitMQ client seems to create too many channels. How can I fix this, and why does my client create so many?
Channels are cached so there should only be as many channels as there are actual RPC calls in process.
You may need to increase the channel max setting on the broker.
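For reference, the broker-side limit can be raised in rabbitmq.conf; the default is 2047 channels per connection in recent RabbitMQ versions, and the value below is purely illustrative:
# rabbitmq.conf
channel_max = 4096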
EDIT
If your RPC calls are long-lived, you can reduce the time the channel is used by using the AsyncRabbitTemplate with an explicit reply queue, and avoid using the direct reply-to feature.
See the documentation.
EDIT2
Here is an example using the AsyncRabbitTemplate; it sends 1000 messages on 100 threads (and the consumer has 100 threads).
The total number of channels used was 107: 100 for the consumers and only 7 for sending.
@SpringBootApplication
public class So56126654Application {
public static void main(String[] args) {
SpringApplication.run(So56126654Application.class, args);
}
@RabbitListener(queues = "so56126654", concurrency = "100")
public String slowService(String in) throws InterruptedException {
Thread.sleep(5_000L);
return in.toUpperCase();
}
@Bean
public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
ExecutorService exec = Executors.newFixedThreadPool(100);
return args -> {
System.out.println(asyncTemplate.convertSendAndReceive("foo").get());
for (int i = 0; i < 1000; i++) {
int n = i;
exec.execute(() -> {
RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo" + n);
try {
System.out.println(future.get(10, TimeUnit.SECONDS));
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
e.printStackTrace();
}
catch (ExecutionException e) {
e.printStackTrace();
}
catch (TimeoutException e) {
e.printStackTrace();
}
});
}
};
}
@Bean
public AsyncRabbitTemplate asyncTemplate(ConnectionFactory connectionFactory) {
return new AsyncRabbitTemplate(connectionFactory, "", "so56126654", "so56126654-replies");
}
@Bean
public Queue queue() {
return new Queue("so56126654");
}
@Bean
public Queue replyQueue() {
return new Queue("so56126654-replies");
}
}
I'm writing a local HTTP server based on Netty. When I run a stress test, I'm limited to 400 requests/second.
To optimize my server, I wrote a simple Netty-based server that just sends "Hello World" to the client, and I launched a stress test with Gatling 2; with this server I got the same result (limited to 400 req/s).
I use YourKit for profiling; there is no extra GC activity, and my open/closed sockets are limited to 480 sockets/s.
I work on a MacBook Pro with 4 cores and 16 GB of RAM, and I use Netty 4.1.
I'm surprised to be limited to 400 req/s, because other benchmark results show 20,000 req/s or more. I understand that there are hardware limits, but 400 req/s for sending "Hello World" on 4 cores with 16 GB of RAM is very low.
Thank you in advance for your help; I don't know where to begin optimizing my Netty code.
Are there any concrete guidelines for optimizing Netty?
Here the source code of my hello world server, followed by the handler of my connections:
public class TestServer {
private static final Logger logger = LogManager.getLogger("TestServer");
int nbSockets = 0;
EventLoopGroup pool = new NioEventLoopGroup();
private void init(int port) {
EventLoopGroup bossGroup = new NioEventLoopGroup(100) ;
try {
long t1 = System.currentTimeMillis() ;
ServerBootstrap b = new ServerBootstrap().group(bossGroup);
b.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast("decoder", new HttpRequestDecoder(8192, 8192 * 2,
8192 * 2));
ch.pipeline().addLast("encoder", new HttpResponseEncoder());
ch.pipeline().addLast(new TestServerHandler(TestServer.this));
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
System.err.println("Error");
super.exceptionCaught(ctx,cause);
}
})
.option(ChannelOption.SO_BACKLOG, 100000)
.option(ChannelOption.SO_KEEPALIVE,false)
.option(ChannelOption.TCP_NODELAY,false)
.option(ChannelOption.SO_REUSEADDR,true)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS,10000)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
scheduler.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
System.err.println(nbSockets);
nbSockets = 0 ;
}
}, 1, 1, TimeUnit.SECONDS);
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(port).sync();
f.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
bossGroup.shutdownGracefully();
System.err.println("Coucou");
}
}
public static void main(String[] args) {
TestServer testServer = new TestServer() ;
testServer.init(8888);
}
}
and here is the source code of my handler:
public class TestServerHandler extends ChannelInboundHandlerAdapter {
private final TestServer testServer;
public TestServerHandler(TestServer testServer) {
this.testServer = testServer ;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
try {
process(ctx, msg);
} catch (Throwable e) {
e.printStackTrace();
}
}
public void process(ChannelHandlerContext ctx, Object msg) throws Exception {
ctx.channel().writeAndFlush(buildHttpResponse()).addListener(new GenericFutureListener<Future<? super Void>>() {
@Override
public void operationComplete(Future<? super Void> future) throws Exception {
ctx.channel().close();
testServer.nbSockets++;
}
}) ;
}
public DefaultFullHttpResponse buildHttpResponse() {
String body = "hello world" ;
byte[] bytes = body.getBytes(Charset.forName("UTF-8"));
ByteBuf byteContent = Unpooled.copiedBuffer(bytes);
HttpResponseStatus httpResponseStatus =HttpResponseStatus.OK;
DefaultFullHttpResponse httpResponse = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
httpResponseStatus, byteContent);
return httpResponse;
}
}
You've disabled keep-alive and are closing the connection on every request, so I suspect you spend most of your time opening and closing HTTP connections.
because the result of other benchmark tests show >20 000 req/s, or more
Which other benchmarks are you referring to? There's a very good chance they were both pooling connections and using HTTP pipelining, hence a very different usage pattern from yours.
Back to your original question (how to optimize Netty), there are two kinds of things you could do:
Micro-optimize allocations: use pooled ByteBufs, or even better, compute them only once
Switch to the native epoll transport (Linux only); see the sketches below
But all those improvements probably won't amount to much compared to connection handling.
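As a hedged sketch of the connection-handling point (assuming Netty 4.1's HttpUtil, HttpHeaderNames, and HttpHeaderValues, applied to the TestServerHandler above), process could honor keep-alive instead of closing unconditionally:
public void process(ChannelHandlerContext ctx, Object msg) throws Exception {
    if (!(msg instanceof HttpRequest)) {
        return;
    }
    HttpRequest request = (HttpRequest) msg;
    boolean keepAlive = HttpUtil.isKeepAlive(request);
    DefaultFullHttpResponse response = buildHttpResponse();
    // Content-Length lets the client detect the end of the response
    // without waiting for the connection to close.
    response.headers().set(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
    if (keepAlive) {
        response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
        ctx.writeAndFlush(response);
    } else {
        // Close only when the client did not ask for keep-alive.
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}
The epoll variant from the second bullet is then a two-line change (Linux only, and it assumes the netty-transport-native-epoll dependency is on the classpath):
EventLoopGroup bossGroup = new EpollEventLoopGroup(100);
ServerBootstrap b = new ServerBootstrap().group(bossGroup);
b.channel(EpollServerSocketChannel.class);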
I am using the HttpComponents NIO server to handle POST requests.
Below is the sample code. It reads the complete body into a byte array using EntityUtils.toByteArray(), which fails if the requester sends a large file.
I couldn't figure out how to read the request data in chunks.
HttpEntity.getContent().read() always returns null.
public static void main(String[] args) throws Exception {
int port = 8280;
// Create HTTP protocol processing chain
HttpProcessor httpproc = HttpProcessorBuilder.create()
.add(new ResponseDate())
.add(new ResponseServer("Test/1.1"))
.add(new ResponseContent())
.add(new ResponseConnControl()).build();
// Create request handler registry
UriHttpAsyncRequestHandlerMapper registry = new UriHttpAsyncRequestHandlerMapper();
// Register the default handler for all URIs
registry.register("/test*", new RequestHandler());
// Create server-side HTTP protocol handler
HttpAsyncService protocolHandler = new HttpAsyncService(httpproc, reqistry) {
@Override
public void connected(final NHttpServerConnection conn) {
System.out.println(conn + ": connection open");
super.connected(conn);
}
@Override
public void closed(final NHttpServerConnection conn) {
System.out.println(conn + ": connection closed");
super.closed(conn);
}
};
// Create HTTP connection factory
NHttpConnectionFactory<DefaultNHttpServerConnection> connFactory;
connFactory = new DefaultNHttpServerConnectionFactory(
ConnectionConfig.DEFAULT);
// Create server-side I/O event dispatch
IOEventDispatch ioEventDispatch = new DefaultHttpServerIODispatch(protocolHandler, connFactory);
// Set I/O reactor defaults
IOReactorConfig config = IOReactorConfig.custom()
.setIoThreadCount(1)
.setSoTimeout(3000)
.setConnectTimeout(3000)
.build();
// Create server-side I/O reactor
ListeningIOReactor ioReactor = new DefaultListeningIOReactor(config);
try {
// Listen on the given port
ioReactor.listen(new InetSocketAddress(port));
// Ready to go!
ioReactor.execute(ioEventDispatch);
} catch (InterruptedIOException ex) {
System.err.println("Interrupted");
} catch (IOException e) {
System.err.println("I/O error: " + e.getMessage());
}
System.out.println("Shutdown");
}
public static class RequestHandler implements HttpAsyncRequestHandler<HttpRequest> {
public void handleInternal(HttpRequest httpRequest, HttpResponse httpResponse, HttpContext httpContext) throws HttpException, IOException {
HttpEntity entity = null;
if (httpRequest instanceof HttpEntityEnclosingRequest)
entity = ((HttpEntityEnclosingRequest)httpRequest).getEntity();
byte[] data;
if (entity == null) {
data = new byte [0];
} else {
data = EntityUtils.toByteArray(entity);
}
System.out.println(new String(data));
httpResponse.setEntity(new StringEntity("success response"));
}
@Override
public HttpAsyncRequestConsumer<HttpRequest> processRequest(HttpRequest request, HttpContext context) throws HttpException, IOException {
return new BasicAsyncRequestConsumer();
}
@Override
public void handle(HttpRequest request, HttpAsyncExchange httpExchange, HttpContext context) throws HttpException, IOException {
HttpResponse response = httpExchange.getResponse();
handleInternal(request, response, context);
httpExchange.submitResponse(new BasicAsyncResponseProducer(response));
}
}
Please consider implementing a custom AbstractAsyncRequestConsumer instead of BasicAsyncRequestConsumer if you want full control over request processing.
You might use these classes as a starting point [1][2]. Note that these are response consumers, though one can use the same approach to create custom request consumers:
[1] http://hc.apache.org/httpcomponents-asyncclient-4.1.x/httpasyncclient/xref/org/apache/http/nio/client/methods/AsyncCharConsumer.html
[2] http://hc.apache.org/httpcomponents-asyncclient-4.1.x/httpasyncclient/xref/org/apache/http/nio/client/methods/AsyncByteConsumer.html
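A rough sketch of such a consumer (the method names follow AbstractAsyncRequestConsumer in httpcore-nio 4.4, so adjust to your version; the chunk handling is a placeholder for your own sink):
static class ChunkedRequestConsumer extends AbstractAsyncRequestConsumer<HttpRequest> {
    private final ByteBuffer buf = ByteBuffer.allocate(8192);
    private volatile HttpRequest request;
    private long total;
    @Override
    protected void onRequestReceived(HttpRequest request) {
        this.request = request;
    }
    @Override
    protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) {
        // Called once when an enclosed entity is detected; nothing to buffer here.
    }
    @Override
    protected void onContentReceived(ContentDecoder decoder, IOControl ioctrl) throws IOException {
        // Called repeatedly as content arrives; drain the decoder in small chunks
        // instead of accumulating the whole body in memory.
        int read;
        while ((read = decoder.read(buf)) > 0) {
            total += read;
            buf.flip();
            // Process buf here: write to a file, update a digest, feed a parser, ...
            buf.clear();
        }
    }
    @Override
    protected HttpRequest buildResult(HttpContext context) {
        System.out.println("received " + total + " bytes");
        return request;
    }
    @Override
    protected void releaseResources() {
        // Release whatever sink onContentReceived writes to.
    }
}
processRequest would then return new ChunkedRequestConsumer() instead of BasicAsyncRequestConsumer.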