Thread management in Jetty Server 9.3.X - jetty-9

I was wondering: how does Jetty Server calculate and manage acceptor, selector, and request threads, and what is the correlation between them?

Selector and acceptor counts are calculated with the formulas below (the defaults apply when no explicit value is configured):
int selectorCount = selectors > 0 ? selectors : Math.max(1, Math.min(4, Runtime.getRuntime().availableProcessors() / 2));
int acceptorCount = acceptors > 0 ? acceptors : Math.max(1, Math.min(4, cores / 8));
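For context, here is a minimal sketch (not from the question; pool sizes and port are illustrative) of how these knobs are set on a Jetty 9.3 ServerConnector; passing -1 for acceptors/selectors makes Jetty apply the default formulas quoted above:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyThreads {
    public static void main(String[] args) throws Exception {
        // Request threads come from the server-wide pool (max 200, min 8 here).
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
        Server server = new Server(threadPool);

        // -1, -1 = use the default acceptor/selector formulas quoted above;
        // positive values override them.
        ServerConnector connector = new ServerConnector(server, -1, -1);
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}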

Why did the Kafka consumer re-process all the records from the last 2 months?

In one of the instances, when the consumer service was restarted, it re-processed all the records that had been sent to Kafka.
Kafka Broker: 0.10.0.1
Kafka producer service: Spring Boot version 1.4.3.RELEASE
Kafka consumer service: Spring Boot version 2.2.0.RELEASE
Now, to investigate this issue, I want to recreate the scenario in the dev/local environment, but so far it is not happening.
What can be the probable cause?
How can I check whether a record, once processed on the consumer side, has its offset committed when we call Acknowledgment.acknowledge()?
Consumer properties:
enable.auto.commit = false
auto.offset.reset = earliest
max.poll.records = 1
max.poll.interval.ms = calculated at runtime from the formula (number_of_retries * x * 2) <= Integer.MAX_VALUE
Retry policy: simple
number of retries = 3
interval between retries = x (millis)
I am creating topics at runtime on the consumer side via beans: NewTopic(topic_name, 1, (short) 1)
There are 2 Kafka clusters and 1 ZooKeeper instance running.
Any help would be appreciated
That broker version is very old; with it, if a consumer received no records for 24 hours, the offsets were removed, and restarting the consumer would cause it to reprocess all the records.
With newer brokers this was changed to 7 days, and the consumer has to be stopped for 7 days for its offsets to be removed.
Spring Boot 1.4.x (and even 1.5.x, 2.0.x) is no longer supported; the current version is 2.3.1.
You should upgrade to a newer broker and a more recent Spring Boot version.
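On the question of verifying commits: with manual acknowledgment, the offset is committed only when acknowledge() is called, so any record processed without reaching that call is redelivered. A minimal sketch of such a listener (topic, group, and types are illustrative, not from the question); it assumes enable.auto.commit=false and the listener container's AckMode set to MANUAL:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    @KafkaListener(topics = "orders", groupId = "order-consumers") // illustrative names
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        process(record);
        ack.acknowledge(); // commit point: without this, the record is re-polled after a restart/rebalance
    }

    private void process(ConsumerRecord<String, String> record) {
        // business logic placeholder
    }
}

You can then inspect the committed offsets and the lag per partition with the kafka-consumer-groups.sh tool that ships with the broker.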

ConnectionPool: pool is empty - increase either maxPoolSize or borrowConnectionTimeout

I was facing this issue with my Spring Boot application, which connects to a DB and MQ and uses the Atomikos transaction manager.
com.atomikos.jms.AtomikosJMSException|Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the AtomikosConnectionFactoryBean.
com.atomikos.datasource.pool.PoolExhaustedException: ConnectionPool: pool is empty - increase either maxPoolSize or borrowConnectionTimeout
at com.atomikos.datasource.pool.ConnectionPool.waitForAtLeastOneAvailableConnection(ConnectionPool.java:326)
at com.atomikos.datasource.pool.ConnectionPool.findOrWaitForAnAvailableConnection(ConnectionPool.java:144)
at com.atomikos.datasource.pool.ConnectionPool.borrowConnection(ConnectionPool.java:132)
at com.atomikos.datasource.pool.ConnectionPoolWithSynchronizedValidation.borrowConnection(ConnectionPoolWithSynchronizedValidation.java:23)
at com.atomikos.jms.AtomikosConnectionFactoryBean.createConnection(AtomikosConnectionFactoryBean.java:601)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.access$100(AbstractPollingMessageListenerContainer.java:77)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer$MessageListenerContainerResourceFactory.createConnection(AbstractPollingMessageListenerContainer.java:490)
at org.springframework.jms.connection.ConnectionFactoryUtils.doGetTransactionalSession(ConnectionFactoryUtils.java:325)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:281)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:245)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076)
at java.lang.Thread.run(Thread.java:748)
I printed the maxPoolSize and found that it was 1. I came across this page (https://www.atomikos.com/Documentation/ConfiguringJms) and found the line where they increase the maxPoolSize to 5. I tried setting it to 2 and it worked.
AtomikosConnectionFactoryBean xaConnectionFactory = new AtomikosConnectionFactoryBean();
xaConnectionFactory.setXaConnectionFactory(ibmMQXAConnectionFactory);
xaConnectionFactory.setMaxPoolSize(2);
Can someone help me understand what the ideal pool size should be, what it is for, etc.?
To process messages, Atomikos uses DB and JMS connections (in your case).
These connections are taken from pools of available connections. To get an idea of why connection pools are needed, follow this link as a starting point - Connection_pool
To put it simply: to process one message at a time, Atomikos needs one DB and one JMS connection/session. So if you plan to process 10 messages in parallel, each connection pool size must be at least 10 (10 for the DB pool and 10 for the JMS pool, respectively).
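A sketch of what that sizing might look like in code, reusing the connection factory bean from the question; the XA data source, the listener container, and the method wrapper are assumptions for illustration, with 10 matching the parallelism above:

import javax.jms.XAConnectionFactory;
import javax.sql.XADataSource;
import com.atomikos.jdbc.AtomikosDataSourceBean;
import com.atomikos.jms.AtomikosConnectionFactoryBean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class PoolSizing {
    // Sizes both pools to the listener concurrency (10 parallel messages).
    static DefaultMessageListenerContainer configure(XAConnectionFactory mqXaCf, XADataSource dbXaDs) {
        AtomikosConnectionFactoryBean xaConnectionFactory = new AtomikosConnectionFactoryBean();
        xaConnectionFactory.setXaConnectionFactory(mqXaCf);
        xaConnectionFactory.setMaxPoolSize(10);             // >= concurrent consumers
        xaConnectionFactory.setBorrowConnectionTimeout(30); // seconds to wait before PoolExhaustedException

        AtomikosDataSourceBean xaDataSource = new AtomikosDataSourceBean();
        xaDataSource.setXaDataSource(dbXaDs);
        xaDataSource.setMaxPoolSize(10);                    // match the JMS pool

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(xaConnectionFactory);
        container.setConcurrentConsumers(10);               // drives the required pool sizes
        return container;
    }
}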

Jco Adapter pooling performance deadlock?

We're running an enterprise-scale SAP application with front-end Spring Boot clients connecting via the JCo adapter 3.0 on Oracle VM, using the connection pool (size 100). We're experiencing unsystematic long-running requests (> 10 s) that are not visible in the SAP application server log, i.e. the bottleneck does not appear to be on the SAP side.
Looking at the trace files (level 4) for an example request, we can see that the time seems to be lost when the adapter thread tries to get a client from the pool (other threads continue execution; irrelevant threads removed for clarity):
[20:05:50:259]: [JCoAPI] JCoContext.isStateful(P-foo-CPIC0) in session ID Client-53-1 returns false
[20:05:50:259]: [JCoAPI] JCoContext.begin(P-foo-CPIC0) in session ID Client-53-1
[20:05:50:259]: [JCoAPI] Started context for session Client-53-1
[20:05:50:259]: [JCoAPI] JCoContext.begin() for destination PFOO_200 (P-foo-CPIC0) on context with id Client-53-1; current state counter is 1
[20:05:50:259]: [JCoAPI] destination PFOO_200 destinationID=P-foo-CPIC0 executes Z_foo sessionID=Client-53-1, threadID=0x35
[20:05:50:259]: [JCoAPI] Context.getConnection on destination PFOO_200 (state: destination = STATEFUL, default = STATELESS)
[20:05:50:259]: [JCoAPI] PoolingFactory.getClient() on pool P-foo-CPIC0
--> time lost here
[20:06:20:840]: [JCoAPI] PoolingFactory.getClient() returns handle [3/84977415]
[20:06:20:840]: [JCoAPI] Context.getConnection on destination PFOO_200 nothing found in the context - got client from ConnectionManager [3/84977415]
[20:06:20:840]: [JCoAPI] JCoClient before execute(Z_foo) on handle [3/84977415]
[20:06:20:840]: [JCoRFC] Executing function Z_foo on handle [3/84977415]
[20:06:20:866]: [JCoAPI] JCoClient after execute(Z_foo) on handle [3/84977415] returns after 26 ms
[20:06:20:866]: [JCoAPI] Context.releaseConnection on destination PFOO_200 [3/84977415]
[20:06:20:867]: [JCoAPI] JCoContext.end(P-foo-CPIC0) in session ID Client-53-1
[20:06:20:867]: [JCoAPI] PoolingFactory.releaseClient() handle [3/84977415] into pool P-foo-CPIC0 [pool size: 3, peak limit: 100, waiting threads: 0, currently used: 1]
[20:06:20:879]: [JCoAPI] Finished context for session Client-53-1
[20:06:20:879]: [JCoAPI] JCoContext.end() for destination PFOO_200 (P-foo-CPIC0) on context with id Client-53-1; current state counter is 0
For a typical request this step completes in milliseconds.
Are there any known limitations or configurations regarding pool handling for the JCo adapter, either on the adapter side or on the SAP side?
Update: we're on JCo adapter 3.0.16 and will double-check 3.0.17 now. DNS seems unlikely, since we're monitoring dig/nslookup and they run without delays.
Which JCo patch level do you use?
Did you try to update to the latest JCo patch level 3.0.17 first?
In your time gap the RFC connection will be opened and the RFC logon will be performed, if the pool is empty at that time. Did you take a closer look with a higher trace level, or did you look into the RFC trace?
This can be anything from not having a free dialog work process on the ABAP side, to SAP system database issues (the database is required for the RFC logon authentication checks), slow response times from the SAP message server (if using load-balanced logons), SNC handshake issues (if using SNC), or general network issues with DNS (try using the IP address instead of a hostname).
Another point worth checking: you say your connection pool has size 100. Is it possible that your program has more than 100 threads? Then it may happen from time to time that all connections are busy in other threads, and the current thread has to wait until a function call in another thread completes and a connection is returned to the pool.
(How long a thread waits on an empty pool can be customized via the "pool wait time" parameter; see the sketch below.)
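For reference, a sketch of the pool-related destination properties (values are illustrative); to my understanding, jco.destination.max_get_client_time is the "pool wait time" mentioned above:

import java.util.Properties;

public class JcoPoolProps {
    public static Properties poolProperties() {
        Properties props = new Properties();
        // Connections kept open in the pool for reuse.
        props.setProperty("jco.destination.pool_capacity", "10");
        // Hard upper limit on concurrently used connections (100 in the question).
        props.setProperty("jco.destination.peak_limit", "100");
        // Max time in ms a thread blocks waiting for a free connection
        // before JCo throws an exception -- the "pool wait time".
        props.setProperty("jco.destination.max_get_client_time", "30000");
        return props;
    }
}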

How to decide session.timeout.ms in the Kafka low-level Processor API

Suppose the punctuate interval is X ms:
props.put("group.max.session.timeout.ms", X * 2);
props.put("session.timeout.ms", X);
props.put("request.timeout.ms", X * 2);
Is the above the correct way to set the session timeout for the Kafka Streams low-level Processor API?
group.max.session.timeout.ms is a broker setting (cf. http://kafka.apache.org/documentation/#brokerconfigs).
For consumer settings within Streams, it's recommended to use the prefix consumer.: props.put("consumer.session.timeout.ms", X)
You should set max.poll.interval.ms to Integer.MAX_VALUE (Streams will change the default value for this to Integer.MAX_VALUE, too).
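Putting that together, a minimal sketch (application id and bootstrap servers are illustrative); StreamsConfig.consumerPrefix() prepends the "consumer." prefix programmatically:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsTimeouts {
    public static Properties config(int x) { // x = punctuate interval in ms
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        // Consumer-level setting, passed through with the "consumer." prefix.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), x);
        // As recommended above for Streams applications.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), Integer.MAX_VALUE);
        return props;
    }
}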

Timeout of JMS Point-to-point requests in JMeter does not result in an error

We are using Apache JMeter 2.12 to measure the response time of our JMS queue. However, we would like to see how many of those requests take less than a certain time. According to the official JMeter site (http://jmeter.apache.org/usermanual/component_reference.html), this should be set by the Timeout property. You can see in the screenshot below how our configuration looks:
However, setting the timeout does not result in an error after sending 100 requests. We can see that some of them apparently take more than that amount of time:
Is there some other setting I am missing, or is there a way to achieve my goal?
Thanks!
The JMeter documentation for JMS Point-to-Point describes the timeout as:
The timeout in milliseconds for the reply-messages. If a reply has not been received within the specified time, the specific testcase fails and the specific reply message received after the timeout is discarded. Default value is 2000 ms.
This times not the actual sending of the message, but the receipt of a response.
The JMeter Point-to-Point source checks whether you have a 'Receive Queue' configured. If you do, it goes through the executor path and uses the timeout value; otherwise it does not use the timeout value.
if (useTemporyQueue()) {
    executor = new TemporaryQueueExecutor(session, sendQueue);
} else {
    producer = session.createSender(sendQueue);
    executor = new FixedQueueExecutor(producer, getTimeoutAsInt(), isUseReqMsgIdAsCorrelId());
}
In your screenshot the JNDI name Receive Queue is not defined, so JMeter uses a temporary queue and does not use the timeout. Whether or not a timeout should be supported in this case is best discussed on the JMeter forum.
Alternatively, if you want to see request times in percentiles/buckets, please read this Stack Overflow Q&A:
I want to find out the percentage of HTTPS requests that take less than a second in JMeter
