How does Apache Http11NioProtocol work? - tomcat7

I am trying to get the benefits of NIO with Tomcat 7's Http11NioProtocol.
I just read an article about it:
Understanding the Tomcat NIO Connector
and looked through the Tomcat docs.
I started a simple server with only a SimpleServlet, which counts the number of concurrent threads, waits for 10 seconds, and returns an empty result:
public class SimpleServlet extends HttpServlet {

    final static Logger logger = Logger.getLogger(SimpleServlet.class);
    private static final AtomicInteger threadCntr = new AtomicInteger();

    @Override
    public void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        doGet(req, resp);
    }

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        logger.info("ProxyService doGet threads[" + threadCntr.incrementAndGet() + "]");
        try {
            Thread.sleep(10000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            logger.info("end processing");
            threadCntr.decrementAndGet();
        }
    }
}
I configured Tomcat's server.xml to use the NIO protocol:
<Connector port="8081" maxHttpHeaderSize="8192" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="200" minSpareThreads="5" maxSpareThreads="5"
enableLookups="false" redirectPort="8443" acceptCount="1"
connectionTimeout="20000" disableUploadTimeout="true" />
I start sending lightweight requests and monitor them using Java VisualVM:
public static void main(String[] args) throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
        new Thread(() -> {
            executeHttp(URL, "test"); // implementation omitted
        }).start();
        Thread.sleep(10);
    }
}
When I ran the test I saw only 200 threads in use (and that's OK),
but the client's requests could not be processed at more than 200 per 10 seconds.
So where is the difference from BIO?
Or have I missed some configuration?
What is the main feature of Http11NioProtocol?

Related

Decent approach to time OkHttpClient events for metrics?

I'd like to gather connection and request timing metrics for an OkHttpClient instance that calls a particular service. I'm wondering if this approach is correct, and whether my interpretation of the event types makes sense?
Timer callTimer = <new codahale timer>;
Timer connectTimer = <new codahale timer>;
Timer secureConnectTimer = <new codahale timer>;
Timer requestTimer = <new codahale timer>;

// this gets registered with my client
new EventListener() {
    // see https://square.github.io/okhttp/events/#eventlistener for info on the ordering of these events
    private final Map<Call, Timer.Context> secureConnectTimerContexts = Maps.newConcurrentMap();
    private final Map<Call, Timer.Context> connectTimerContexts = Maps.newConcurrentMap();
    private final Map<Call, Timer.Context> callTimerContexts = Maps.newConcurrentMap();
    private final Map<Call, Timer.Context> requestTimerContexts = Maps.newConcurrentMap();

    @Override
    public void secureConnectStart(Call call) {
        secureConnectTimerContexts.put(call, secureConnectTimer.time());
    }

    @Override
    public void secureConnectEnd(Call call, @Nullable Handshake handshake) {
        Timer.Context context = secureConnectTimerContexts.remove(call);
        if (Objects.nonNull(context)) {
            context.stop();
        }
    }

    @Override
    public void connectStart(Call call, InetSocketAddress inetSocketAddress, Proxy proxy) {
        connectTimerContexts.put(call, connectTimer.time());
    }

    @Override
    public void connectEnd(Call call, InetSocketAddress inetSocketAddress, Proxy proxy, @Nullable Protocol protocol) {
        Timer.Context context = connectTimerContexts.remove(call);
        if (Objects.nonNull(context)) {
            context.stop();
        }
    }

    @Override
    public void connectionAcquired(Call call, Connection connection) {
        requestTimerContexts.put(call, requestTimer.time());
    }

    @Override
    public void connectionReleased(Call call, Connection connection) {
        Timer.Context context = requestTimerContexts.remove(call);
        if (context != null) {
            context.stop();
        }
    }

    @Override
    public void connectFailed(Call call, InetSocketAddress inetSocketAddress, Proxy proxy,
                              @Nullable Protocol protocol, IOException ioe) {
        Timer.Context context = connectTimerContexts.remove(call);
        if (Objects.nonNull(context)) {
            context.stop();
        }
    }

    @Override
    public void callStart(Call call) {
        callTimerContexts.put(call, callTimer.time());
    }

    @Override
    public void callEnd(Call call) {
        callFinishedForMetrics(call);
    }

    @Override
    public void callFailed(Call call, IOException ioe) {
        callFinishedForMetrics(call);
    }

    private void callFinishedForMetrics(Call call) {
        Timer.Context callTimerContext = callTimerContexts.remove(call);
        if (callTimerContext != null) {
            callTimerContext.stop();
        }
        requestTimerContexts.remove(call);
        secureConnectTimerContexts.remove(call);
        connectTimerContexts.remove(call);
    }
}
You can use EventListener.Factory to create a unique listener instance for each Call. That way you don't need all the maps; the Timer.Context objects can just be instance fields of the call-bound EventListener.
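For illustration, here is a rough sketch of that per-call approach; the variable names are made up and only a couple of the timed events are shown, assuming the codahale Timer fields from the snippet above are in scope:

// Hypothetical sketch: one EventListener instance per Call, created by a Factory,
// so the Timer.Context values can live in plain instance fields instead of maps.
EventListener.Factory timingListenerFactory = new EventListener.Factory() {
    @Override
    public EventListener create(Call call) {
        return new EventListener() {
            private Timer.Context callContext;
            private Timer.Context connectContext;

            @Override
            public void callStart(Call call) {
                callContext = callTimer.time();
            }

            @Override
            public void connectStart(Call call, InetSocketAddress address, Proxy proxy) {
                connectContext = connectTimer.time();
            }

            @Override
            public void connectEnd(Call call, InetSocketAddress address, Proxy proxy, @Nullable Protocol protocol) {
                if (connectContext != null) {
                    connectContext.stop();
                }
            }

            @Override
            public void callEnd(Call call) {
                if (callContext != null) {
                    callContext.stop();
                }
            }

            @Override
            public void callFailed(Call call, IOException ioe) {
                callEnd(call); // same cleanup as a successful call
            }
        };
    }
};

OkHttpClient client = new OkHttpClient.Builder()
        .eventListenerFactory(timingListenerFactory)
        .build();

Because each Call gets its own listener, there is nothing to remove from shared maps when a call ends or fails.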

Netty Parallel Handler Processing

Following recommendations elsewhere, I am attempting to parallelize the final inbound handler in my Netty pipeline like this:
public final class EchoServer {

    private EventLoopGroup group = new NioEventLoopGroup();
    private UnorderedThreadPoolEventExecutor workers = new UnorderedThreadPoolEventExecutor(10);

    public void start(int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioDatagramChannel.class).option(ChannelOption.SO_BROADCAST, true)
                    .handler(new ChannelInitializer<NioDatagramChannel>() {
                        @Override
                        protected void initChannel(NioDatagramChannel channel) throws Exception {
                            channel.pipeline().addLast(workers, new SimpleChannelInboundHandler<DatagramPacket>() {
                                @Override
                                public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
                                    System.err.println(packet);
                                    // Simulated database delay that has to occur before responding
                                    Thread.sleep(1000);
                                    ctx.write(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1), packet.sender()));
                                }

                                @Override
                                public void channelReadComplete(ChannelHandlerContext ctx) {
                                    ctx.flush();
                                }

                                @Override
                                public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                                    cause.printStackTrace();
                                }
                            });
                        }
                    });
            b.bind(port).sync().channel().closeFuture().await();
        } finally {
            group.shutdownGracefully();
        }
    }

    public void stop() {
        group.shutdownGracefully();
    }
}
I have ten clients that connect concurrently, as a test, and I am measuring execution time for handling all the requests. As expected, with the 1-second delay and sequential execution it takes just over 10 seconds. I am trying to get execution down to under 2 seconds to prove parallel handling.
From what I understand, adding a handler to the pipeline with an explicitly assigned executor is supposed to parallelize that handler's work across the threads of the executor.
Instead of seeing an increase in performance, what I am finding is that my client does not receive the responses when I add the parallel processing. The thread sleep is there to simulate the time it may take to write the incoming data to a database. Am I doing something obviously wrong here?
I worked around the apparent lack of Netty support for doing the final UDP processing in parallel by using standard Java concurrency mechanisms.
public final class EchoServer {

    private EventLoopGroup group = new NioEventLoopGroup();
    private ExecutorService executors = Executors.newFixedThreadPool(10);

    public void start(int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioDatagramChannel.class).handler(new ChannelInitializer<NioDatagramChannel>() {
                @Override
                protected void initChannel(NioDatagramChannel channel) throws Exception {
                    channel.pipeline().addLast(new SimpleChannelInboundHandler<DatagramPacket>() {
                        @Override
                        public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
                            CompletableFuture.runAsync(() -> {
                                System.err.println(packet);
                                try {
                                    Thread.sleep(1000);
                                } catch (InterruptedException e) {
                                    e.printStackTrace();
                                }
                                ctx.writeAndFlush(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1),
                                        packet.sender()));
                            }, executors);
                        }

                        @Override
                        public void channelReadComplete(ChannelHandlerContext ctx) {
                            ctx.flush();
                        }

                        @Override
                        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                            cause.printStackTrace();
                        }
                    });
                }
            });
            b.bind(port).sync().channel().closeFuture().await();
        } finally {
            group.shutdownGracefully();
        }
    }

    public void stop() {
        group.shutdownGracefully();
    }
}
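One small aside on this workaround (my note, not part of the original post): the fixed thread pool is never released. A stop() along these lines would avoid leaking those worker threads:

public void stop() {
    group.shutdownGracefully();
    // also release the pool used by CompletableFuture.runAsync
    executors.shutdown();
}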

How to unit test Spring WebSocketStompClient

I have a STOMP client which connects to a Spring Boot server and performs some subscriptions. The code coverage for this WebSocket client is at 0%. I can only find code samples for how to unit test the Spring Boot WebSocket server, but this is the client side, and I want to verify that the STOMP client works fine. Please let me know if my question is missing any details.
Here's the sample connect method I need to write a unit test for.
StompSession connect(String connectionUrl) throws Exception {
    WebSocketClient transport = new StandardWebSocketClient();
    WebSocketStompClient stompClient = new WebSocketStompClient(transport);
    stompClient.setMessageConverter(new StringMessageConverter());
    ListenableFuture<StompSession> stompSession = stompClient.connect(connectionUrl, new WebSocketHttpHeaders(), new MyHandler());
    return stompSession.get();
}
Note: the client is part of a lightweight SDK, so it cannot take on heavy dependencies for this unit test.
Thanks to Artem for the suggestion to look into the Spring WebSocket test examples. Here's how I solved it; hope this helps someone.
public class WebSocketStompClientTests {

    private static final Logger LOG = LoggerFactory.getLogger(WebSocketStompClientTests.class);

    @Rule
    public final TestName testName = new TestName();

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    private WebSocketTestServer server;
    private AnnotationConfigWebApplicationContext wac;

    @Before
    public void setUp() throws Exception {
        LOG.debug("Setting up before '" + this.testName.getMethodName() + "'");
        this.wac = new AnnotationConfigWebApplicationContext();
        this.wac.register(TestConfig.class);
        this.wac.refresh();
        this.server = new TomcatWebSocketTestServer();
        this.server.setup();
        this.server.deployConfig(this.wac);
        this.server.start();
    }

    @After
    public void tearDown() throws Exception {
        try {
            this.server.undeployConfig();
        }
        catch (Throwable t) {
            LOG.error("Failed to undeploy application config", t);
        }
        try {
            this.server.stop();
        }
        catch (Throwable t) {
            LOG.error("Failed to stop server", t);
        }
        try {
            this.wac.close();
        }
        catch (Throwable t) {
            LOG.error("Failed to close WebApplicationContext", t);
        }
    }

    @Configuration
    static class TestConfig extends WebSocketMessageBrokerConfigurationSupport {

        @Override
        protected void registerStompEndpoints(StompEndpointRegistry registry) {
            // Can't rely on classpath detection
            RequestUpgradeStrategy upgradeStrategy = new TomcatRequestUpgradeStrategy();
            registry.addEndpoint("/app")
                    .setHandshakeHandler(new DefaultHandshakeHandler(upgradeStrategy))
                    .setAllowedOrigins("*")
                    .withSockJS();
        }

        @Override
        public void configureMessageBroker(MessageBrokerRegistry configurer) {
            configurer.setApplicationDestinationPrefixes("/publish");
            configurer.enableSimpleBroker("/topic", "/queue");
        }
    }

    @Test
    public void testConnect() {
        TestStompClient stompClient = TestStompClient.connect();
        assert(true);
    }
}
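For completeness, here is a rough sketch of how the SDK's connect(String) method from the question could be exercised against that embedded server. This is an assumption, not part of the original answer: the port accessor and the URL shape are guesses (with .withSockJS() on the endpoint, a raw WebSocket client typically needs the /websocket transport sub-path), and the SDK method is assumed to be reachable from the test.

@Test
public void testConnectReturnsConnectedSession() throws Exception {
    // URL shape is an assumption based on the /app endpoint registered in TestConfig
    String url = "ws://localhost:" + this.server.getPort() + "/app/websocket";
    StompSession session = connect(url); // the SDK method under test
    try {
        assertTrue(session.isConnected());
    } finally {
        session.disconnect();
    }
}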

How to: Implement a BatchMessageListenerContainer for bulk consuming a JMS queue

I recently faced the need for a JMS consumer in Spring Integration - one capable of consuming bursts of high volume without stressing my target Oracle database with too many commits.
The DefaultMessageListenerContainer does not seem to support anything but message-by-message transactions.
I googled for solutions and found a couple, but most of them were implemented not by inheriting from DMLC but by cloning and modifying its original source code, making them likely to break if I later move to a more recent version of spring-jms. The cloned code also referenced private properties of DMLC, which consequently had to be left out, and to make it all work a couple of interfaces and a custom message listener were needed as well. All in all, I did not feel comfortable with that.
So - what to do?
Well - here is a simple and compact solution that is entirely based on a single class derived from DefaultMessageListenerContainer.
I have only tested it with a message-driven-channel-adapter and a ChainedTransactionManager, though, since that is the basic scenario where you need to do stuff like this.
This is the code:
package dk.itealisten.myservice.spring.components;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import java.util.ArrayList;
import java.util.Enumeration;
public class BatchMessageListenerContainer extends DefaultMessageListenerContainer {

    public static final int DEFAULT_BATCH_SIZE = 100;
    public int batchSize = DEFAULT_BATCH_SIZE;

    /**
     * Override receiveMessage to return an instance of BatchMessage - an inner class declared further down.
     */
    @Override
    protected Message receiveMessage(MessageConsumer consumer) throws JMSException {
        BatchMessage batch = new BatchMessage();
        while (!batch.releaseAfterMessage(super.receiveMessage(consumer))) ;
        return batch.messages.size() == 0 ? null : batch;
    }

    /**
     * As BatchMessage implements the javax.jms.Message interface it fits perfectly into the DMLC - the only caveat is
     * that SimpleMessageConverter doesn't know how to convert it to a Spring Integration Message - but that can be helped.
     * As BatchMessage only serves as a container carrying the actual javax.jms.Message instances from the DMLC to the
     * MessageListener, it need not provide meaningful implementations of the interface methods as long as they are there.
     */
    protected class BatchMessage implements Message {

        public ArrayList<Message> messages = new ArrayList<Message>();

        /**
         * Add the message to the collection of messages and return true if the batch meets the criteria for releasing
         * it to the MessageListener.
         */
        public boolean releaseAfterMessage(Message message) {
            if (message != null) {
                messages.add(message);
            }
            // Are we ready to release?
            return message == null || messages.size() >= batchSize;
        }

        // Below are only dummy implementations of the abstract methods of javax.jms.Message
        @Override public String getJMSMessageID() throws JMSException { return null; }
        @Override public void setJMSMessageID(String s) throws JMSException { }
        @Override public long getJMSTimestamp() throws JMSException { return 0; }
        @Override public void setJMSTimestamp(long l) throws JMSException { }
        @Override public byte[] getJMSCorrelationIDAsBytes() throws JMSException { return new byte[0]; }
        @Override public void setJMSCorrelationIDAsBytes(byte[] bytes) throws JMSException { }
        @Override public void setJMSCorrelationID(String s) throws JMSException { }
        @Override public String getJMSCorrelationID() throws JMSException { return null; }
        @Override public Destination getJMSReplyTo() throws JMSException { return null; }
        @Override public void setJMSReplyTo(Destination destination) throws JMSException { }
        @Override public Destination getJMSDestination() throws JMSException { return null; }
        @Override public void setJMSDestination(Destination destination) throws JMSException { }
        @Override public int getJMSDeliveryMode() throws JMSException { return 0; }
        @Override public void setJMSDeliveryMode(int i) throws JMSException { }
        @Override public boolean getJMSRedelivered() throws JMSException { return false; }
        @Override public void setJMSRedelivered(boolean b) throws JMSException { }
        @Override public String getJMSType() throws JMSException { return null; }
        @Override public void setJMSType(String s) throws JMSException { }
        @Override public long getJMSExpiration() throws JMSException { return 0; }
        @Override public void setJMSExpiration(long l) throws JMSException { }
        @Override public long getJMSDeliveryTime() throws JMSException { return 0; }
        @Override public void setJMSDeliveryTime(long l) throws JMSException { }
        @Override public int getJMSPriority() throws JMSException { return 0; }
        @Override public void setJMSPriority(int i) throws JMSException { }
        @Override public void clearProperties() throws JMSException { }
        @Override public boolean propertyExists(String s) throws JMSException { return false; }
        @Override public boolean getBooleanProperty(String s) throws JMSException { return false; }
        @Override public byte getByteProperty(String s) throws JMSException { return 0; }
        @Override public short getShortProperty(String s) throws JMSException { return 0; }
        @Override public int getIntProperty(String s) throws JMSException { return 0; }
        @Override public long getLongProperty(String s) throws JMSException { return 0; }
        @Override public float getFloatProperty(String s) throws JMSException { return 0; }
        @Override public double getDoubleProperty(String s) throws JMSException { return 0; }
        @Override public String getStringProperty(String s) throws JMSException { return null; }
        @Override public Object getObjectProperty(String s) throws JMSException { return null; }
        @Override public Enumeration getPropertyNames() throws JMSException { return null; }
        @Override public void setBooleanProperty(String s, boolean b) throws JMSException { }
        @Override public void setByteProperty(String s, byte b) throws JMSException { }
        @Override public void setShortProperty(String s, short i) throws JMSException { }
        @Override public void setIntProperty(String s, int i) throws JMSException { }
        @Override public void setLongProperty(String s, long l) throws JMSException { }
        @Override public void setFloatProperty(String s, float v) throws JMSException { }
        @Override public void setDoubleProperty(String s, double v) throws JMSException { }
        @Override public void setStringProperty(String s, String s1) throws JMSException { }
        @Override public void setObjectProperty(String s, Object o) throws JMSException { }
        @Override public void acknowledge() throws JMSException { }
        @Override public void clearBody() throws JMSException { }
        @Override public <T> T getBody(Class<T> aClass) throws JMSException { return null; }
        @Override public boolean isBodyAssignableTo(Class aClass) throws JMSException { return false; }
    }
}
Below is a sample showing how it could be used in a Spring application context:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:jms="http://www.springframework.org/schema/integration/jms"
       xmlns:p="http://www.springframework.org/schema/p"
       xsi:schemaLocation="
            http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
            http://www.springframework.org/schema/integration
            http://www.springframework.org/schema/integration/spring-integration-4.0.xsd
            http://www.springframework.org/schema/integration/jms
            http://www.springframework.org/schema/integration/jms/spring-integration-jms-4.0.xsd">

    <!-- Plug the BatchMessageListenerContainer into a message-driven-channel-adapter -->
    <jms:message-driven-channel-adapter container-class="dk.itealisten.myservice.spring.components.BatchMessageListenerContainer"
                                        acknowledge="transacted"
                                        channel="from.mq"
                                        concurrent-consumers="5"
                                        max-concurrent-consumers="15"
                                        connection-factory="jmsConnectionFactory"
                                        transaction-manager="transactionManager"
                                        destination="my.mq.queue"
    />

    <!-- Flow processing the BatchMessages posted on the "from.mq" channel -->
    <int:chain input-channel="from.mq" output-channel="nullChannel">
        <int:splitter expression="payload.messages" />
        <!-- This is where we deal with conversion to Spring messages, as the payload is now a single standard javax.jms.Message implementation -->
        <int:transformer ref="smc" method="fromMessage"/>
        <!-- And finally we persist -->
        <int:service-activator ref="jdbcPersistor" method="persist"/>
    </int:chain>

    <!-- Various supporting beans -->

    <!-- A bean to handle the database persistence -->
    <bean id="jdbcPersistor" class="dk.itealisten.myservice.spring.components.JdbcPersistor" p:dataSource-ref="dataSource" />

    <!-- A bean to handle the conversion that could not take place in the MessageListener, as it doesn't know how to convert a BatchMessage -->
    <bean id="smc" class="org.springframework.jms.support.converter.SimpleMessageConverter"/>

    <!-- The transaction manager must make sure messages are committed outbound (JDBC) before being cleaned up inbound (JMS). -->
    <bean id="transactionManager" class="org.springframework.data.transaction.ChainedTransactionManager">
        <constructor-arg name="transactionManagers">
            <list>
                <bean class="org.springframework.jms.connection.JmsTransactionManager" p:connectionFactory-ref="jmsConnectionFactory" />
                <bean class="org.springframework.jdbc.datasource.DataSourceTransactionManager" p:dataSource-ref="dataSource" />
            </list>
        </constructor-arg>
    </bean>

</beans>

netty 4 ChannelInboundHandlerAdapter.channelReadComplete called twice

When you use Netty (4.0.23, Java 1.7u67, Win8 x64) as a client, ChannelInboundHandlerAdapter.channelReadComplete(...) should be called once, after Netty completes reading the response, right?
Trying different sites, it's always called twice:
@Test
public void testDoubleReadComplete() throws Exception {
    final String host = "www.google.de";
    final CountDownLatch count = new CountDownLatch(20);
    bootstrap = new Bootstrap()
            .group(new NioEventLoopGroup())
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelActive(ChannelHandlerContext ctx) throws Exception {
                            String request = String.format(
                                    "GET / HTTP/1.1\n" +
                                    "Host: " + host + "\n" +
                                    "\n\n"
                            );
                            System.out.println("sending...");
                            System.out.println(request);
                            ByteBuf req = Unpooled.wrappedBuffer(request.getBytes(Charset.defaultCharset()));
                            ctx.writeAndFlush(req);
                        }

                        @Override
                        public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
                            System.err.println("777 read complete");
                        }

                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            ByteBuf resp = (ByteBuf) msg;
                            count.countDown();
                            System.out.printf("****************************************************>>>>> %s%n", Thread.currentThread().getName());
                            System.out.println(resp.toString(Charset.defaultCharset()));
                            System.out.println("<<<<<****************************************************");
                            resp.release();
                        }
                    });
                }
            });
    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, 80));
    future.awaitUninterruptibly();
    count.await(5, TimeUnit.SECONDS);
}
This outputs "777 read complete" twice. Why?
I'm not sure that channelReadComplete is intended for how you are trying to use it.
Read the javadoc for the method. The data is not guaranteed to arrive all at once, so Netty reads the data as it arrives in a non-blocking manner. This method notifies you when the current read operation has finished, which is not necessarily the last read operation.
I'm not exactly sure about your use case, but here is some non-production code that may be closer to what you are trying to accomplish:
@Test
public void testDoubleReadComplete() throws Exception {
    final String host = "www.google.de";
    final CountDownLatch count = new CountDownLatch(20);
    Bootstrap bootstrap = new Bootstrap()
            .group(new NioEventLoopGroup())
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelActive(ChannelHandlerContext ctx) throws Exception {
                            String request = String.format(
                                    "GET / HTTP/1.1\r\n" +
                                    "Host: " + host + "\r\n" +
                                    "Connection: close\r\n" +
                                    "\r\n"
                            );
                            System.out.println("sending...");
                            System.out.println(request);
                            ByteBuf req = Unpooled.wrappedBuffer(request.getBytes(Charset.defaultCharset()));
                            ctx.writeAndFlush(req);
                        }

                        @Override
                        public void channelInactive(ChannelHandlerContext ctx) throws Exception {
                            System.err.println("777 read complete");
                        }

                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            ByteBuf resp = (ByteBuf) msg;
                            count.countDown();
                            System.out.printf("****************************************************>>>>> %s%n", Thread.currentThread().getName());
                            System.out.println(resp.toString(Charset.defaultCharset()));
                            System.out.println("<<<<<****************************************************");
                            resp.release();
                        }
                    });
                }
            });
I found the cause and solution: ChannelOption.MAX_MESSAGES_PER_READ.
Now I'm testing with .option(ChannelOption.MAX_MESSAGES_PER_READ, Integer.MAX_VALUE), and so far so good: channelReadComplete is called once at the end of the response, as I need.
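For reference, a sketch of where that option sits on the Bootstrap from the test above (the pipeline itself is unchanged):

Bootstrap bootstrap = new Bootstrap()
        .group(new NioEventLoopGroup())
        .channel(NioSocketChannel.class)
        // raise the per-read-loop message limit so the whole response is less likely
        // to be split across several read passes, each ending in channelReadComplete
        .option(ChannelOption.MAX_MESSAGES_PER_READ, Integer.MAX_VALUE)
        .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) {
                // same ChannelInboundHandlerAdapter as in the test above
            }
        });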
