Spring 4.3 Framework: Deadlock on ConcurrentHashMap AbstractBeanFactory.doGetBean

Scenario - This looks like a timing issue. We acquire an application lock (Lock#1) before calling getBean(), and Spring then synchronizes on its singleton-registry ConcurrentHashMap (Lock#2). Threads end up blocked on Lock#1 and Lock#2 in opposite order. The thread dump snippets below illustrate the case.
T1 - (Acquired Lock#1 and waiting for Lock#2)
"Catalina-utility-1" #85 prio=1 os_prio=0 tid=0x00007f9918034000 nid=0x33f0 waiting for monitor entry [0x00007f97f1ccd000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:213)
- waiting to lock <0x00000004df51aef8> (a java.util.concurrent.ConcurrentHashMap) (-> waiting here for the Spring lock, Lock#2, while already holding Lock#1)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1082)
at com.bmc.unifiedadmin.service.ABCServiceBeanContext.getService(ABCServiceBeanContext.java:53)
at com.bmc.unifiedadmin.service.ABCServicesFactory.getService(ABCServicesFactory.java:52) (-> the application semaphore, Lock#1, was acquired here)
T2 - (Acquired Lock#2 and waiting for Lock#1)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) (-> waiting here for the TSPS application lock, Lock#1, while already holding Lock#2)
... (intermediate frames omitted)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:142)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:89)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1151)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:511)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
- locked <0x00000004df51aef8> (a java.util.concurrent.ConcurrentHashMap) (-> Lock#2 was acquired here)
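To make the interleaving concrete, here is a hypothetical sketch of the pattern the two dumps show (the class and bean names are invented, not our real code): one thread takes the application semaphore and then calls getBean(), while lazy creation of the bean, already running under Spring's singleton lock, tries to take the same semaphore.

import java.util.concurrent.Semaphore;

import org.springframework.context.ApplicationContext;

// Hypothetical sketch of the ordering seen in the dumps; class and bean names are invented.
class ServiceFactory {
    static final Semaphore APP_LOCK = new Semaphore(1);          // Lock#1

    private final ApplicationContext context;

    ServiceFactory(ApplicationContext context) {
        this.context = context;
    }

    Object getService(String beanName) {
        APP_LOCK.acquireUninterruptibly();                       // T1: takes Lock#1 first ...
        try {
            // ... then doGetBean() -> getSingleton() synchronizes on the singleton
            // ConcurrentHashMap inside DefaultSingletonBeanRegistry (Lock#2).
            return context.getBean(beanName);
        } finally {
            APP_LOCK.release();
        }
    }
}

// A lazily created singleton whose constructor also needs the application lock.
class LazyService {
    LazyService() {
        // T2: already runs under Lock#2 (inside getSingleton()) and now blocks on Lock#1.
        ServiceFactory.APP_LOCK.acquireUninterruptibly();
        try {
            // initialization that depends on the application lock
        } finally {
            ServiceFactory.APP_LOCK.release();
        }
    }
}

One way out is to make both paths acquire the locks in the same order, for example by not holding the application lock across getBean(), or by initializing the bean eagerly so its constructor never runs under the singleton lock while another thread holds Lock#1.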

Related

JMeter threads getting blocked after some time in test

Using JMeter 5.4.1 and JDK 11 in non-GUI mode, running 40 threads at 5 requests/sec.
All Listeners Off
Disabled all assertions, relying on application logs
Groovy used as the scripting language in JSR223 samplers, used to emulate pacing between requests
Heap Size increased to 12 GB
After around 40 minutes (sometimes 70), JMeter stops generating load, as seen in the web server logs. I ran VisualVM on the JMeter machine, attached to the JMeter process, and took a thread dump.
The majority of my threads are blocked waiting on a monitor.
Log as below:
"Script11 - GetRequest 11-1" #61 prio=5 os_prio=0 cpu=13390.63ms elapsed=4266.73s tid=0x000001d1599d9800 nid=0xee0 waiting for monitor entry [0x000000ca924fe000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(java.base#11.0.11/PrintStream.java:881)
- waiting to lock <0x0000000500857638> (a java.io.PrintStream)
at org.apache.jmeter.reporters.Summariser.formatAndWriteToLog(Summariser.java:329)
at org.apache.jmeter.reporters.Summariser.sampleOccurred(Summariser.java:208)
at org.apache.jmeter.threads.ListenerNotifier.notifyListeners(ListenerNotifier.java:58)
at org.apache.jmeter.threads.JMeterThread.notifyListeners(JMeterThread.java:1024)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:579)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.lang.Thread.run(java.base#11.0.11/Thread.java:834)
Locked ownable synchronizers:
- None
"0x0000000500857638" is Locked on
"Script09 - GetRequest9 9-2" #55 prio=5 os_prio=0 cpu=13562.50ms elapsed=4266.74s tid=0x000001d1599d5000 nid=0x13e0 runnable [0x000000ca91efe000]
java.lang.Thread.State: RUNNABLE
at java.io.FileOutputStream.writeBytes(java.base#11.0.11/Native Method)
at java.io.FileOutputStream.write(java.base#11.0.11/FileOutputStream.java:354)
at java.io.BufferedOutputStream.flushBuffer(java.base#11.0.11/BufferedOutputStream.java:81)
at java.io.BufferedOutputStream.flush(java.base#11.0.11/BufferedOutputStream.java:142)
- locked <0x0000000500857660> (a java.io.BufferedOutputStream)
at java.io.PrintStream.write(java.base#11.0.11/PrintStream.java:561)
- locked <0x0000000500857638> (a java.io.PrintStream)
at sun.nio.cs.StreamEncoder.writeBytes(java.base#11.0.11/StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implFlushBuffer(java.base#11.0.11/StreamEncoder.java:312)
at sun.nio.cs.StreamEncoder.flushBuffer(java.base#11.0.11/StreamEncoder.java:104)
- locked <0x00000005008577b8> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flushBuffer(java.base#11.0.11/OutputStreamWriter.java:181)
at java.io.PrintStream.write(java.base#11.0.11/PrintStream.java:606)
- locked <0x0000000500857638> (a java.io.PrintStream)
at java.io.PrintStream.print(java.base#11.0.11/PrintStream.java:745)
at java.io.PrintStream.println(java.base#11.0.11/PrintStream.java:882)
- locked <0x0000000500857638> (a java.io.PrintStream)
at org.apache.jmeter.reporters.Summariser.formatAndWriteToLog(Summariser.java:329)
at org.apache.jmeter.reporters.Summariser.sampleOccurred(Summariser.java:208)
at org.apache.jmeter.threads.ListenerNotifier.notifyListeners(ListenerNotifier.java:58)
at org.apache.jmeter.threads.JMeterThread.notifyListeners(JMeterThread.java:1024)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:579)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.lang.Thread.run(java.base#11.0.11/Thread.java:834)
Locked ownable synchronizers:
- None
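For context, both dumps point at the Summariser writing through System.out: every sampler thread that finishes a request serializes on the single monitor inside the shared PrintStream. A standalone sketch of that contention pattern (plain Java, not JMeter code):

// Standalone demonstration of many threads contending on the single monitor
// inside a shared PrintStream (System.out), the pattern visible in the dumps.
public class PrintStreamContention {
    public static void main(String[] args) {
        for (int i = 0; i < 40; i++) {               // 40 threads, like the test plan
            final int id = i;
            new Thread(() -> {
                for (int n = 0; n < 1_000; n++) {
                    // println() synchronizes on the PrintStream; while one thread is
                    // flushing to a slow console or redirected file, the others show up
                    // as BLOCKED "waiting to lock <... a java.io.PrintStream>".
                    System.out.println("thread " + id + " sample " + n);
                }
            }, "Script-" + id).start();
        }
    }
}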
Heap utilization is around 1.5 GB out of the 12 GB allocated.
CPU utilization on the host machine is around 15%.
Any suggestions on how to avoid the JMeter threads getting blocked?
Thanks.

Can't connect to Bigtable to scan HTable data due to hardcoded managed=true in hbase client jars

I'm working on a custom load function to load data from Bigtable using Pig on Dataproc. I compile my Java code against the following list of jar files I grabbed from Dataproc. When I run the Pig script below, it fails when it tries to establish a connection with Bigtable.
Error message is:
Bigtable does not support managed connections.
Questions:
Is there a workaround for this problem?
Is this a known issue, and is there a plan to fix or adjust it?
Is there a different way of implementing multi scans as a load function for Pig that will work with Bigtable?
Details:
Jar files:
hadoop-common-2.7.3.jar
hbase-client-1.2.2.jar
hbase-common-1.2.2.jar
hbase-protocol-1.2.2.jar
hbase-server-1.2.2.jar
pig-0.16.0-core-h2.jar
Here's a simple Pig script using my custom load function:
%default gte '2017-03-23T18:00Z'
%default lt '2017-03-23T18:05Z'
%default SHARD_FIRST '00'
%default SHARD_LAST '25'
%default GTE_SHARD '$gte\_$SHARD_FIRST'
%default LT_SHARD '$lt\_$SHARD_LAST'
raw = LOAD 'hbase://events_sessions'
USING com.eduboom.pig.load.HBaseMultiScanLoader('$GTE_SHARD', '$LT_SHARD', 'event:*')
AS (es_key:chararray, event_array);
DUMP raw;
My custom load function HBaseMultiScanLoader creates a list of Scan objects to perform multiple scans on different ranges of data in the table events_sessions determined by the time range between gte and lt and sharded by SHARD_FIRST through SHARD_LAST.
HBaseMultiScanLoader extends org.apache.pig.LoadFunc so it can be used in the Pig script as load function.
When Pig runs my script, it calls LoadFunc.getInputFormat().
My implementation of getInputFormat() returns an instance of my custom class MultiScanTableInputFormat which extends org.apache.hadoop.mapreduce.InputFormat.
MultiScanTableInputFormat initializes an org.apache.hadoop.hbase.client.HTable object to establish the connection to the table.
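To make the failing path concrete, here is a simplified sketch of that table setup (an approximation of my input format, not the exact source):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;

// Approximation of MultiScanTableInputFormat's table setup, not the exact source.
public class MultiScanTableSetup {
    private HTable table;

    public void setConf(Configuration conf) throws IOException {
        // Deprecated constructor: internally it goes through
        // ConnectionManager.getConnectionInternal(), which passes managed=true,
        // and AbstractBigtableConnection rejects managed connections.
        table = new HTable(conf, "events_sessions");
    }
}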
Digging into the hbase-client source code, I see that org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal() calls org.apache.hadoop.hbase.client.ConnectionManager.createConnection() with the attribute “managed” hardcoded to “true”.
You can see from the stack trace below that my code (MultiScanTableInputFormat) tries to initialize an HTable object, which invokes getConnectionInternal(), which does not provide an option to set managed to false.
Going down the stack trace, you will get to AbstractBigtableConnection that will not accept managed=true and therefore cause the connection to Bigtable to fail.
Here’s the stack trace showing the error:
2017-03-24 23:06:44,890 [JobControl] ERROR com.turner.hbase.mapreduce.MultiScanTableInputFormat - java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:431)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:302)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:185)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:151)
at com.eduboom.hbase.mapreduce.MultiScanTableInputFormat.setConf(Unknown Source)
at com.eduboom.pig.load.HBaseMultiScanLoader.getInputFormat(Unknown Source)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 26 more
Caused by: java.lang.IllegalArgumentException: Bigtable does not support managed connections.
at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:123)
at com.google.cloud.bigtable.hbase1_2.BigtableConnection.<init>(BigtableConnection.java:55)
... 31 more
The original problem was caused by the use of outdated and deprecated hbase client jars and classes.
I updated my code to use the newest hbase client jars provided by Google and the original problem was fixed.
I still get stuck on some ZooKeeper issue that I have not yet figured out, but that's a conversation for a different question.
This one is answered!
I ran into the same error message:
Bigtable does not support managed connections.
However, according to my research, the root cause is that the HTable class cannot be constructed directly. After changing the code to obtain the table via connection.getTable(), the problem was resolved.
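For anyone hitting the same error, the working pattern looks roughly like this (a sketch using the table name from the script above, error handling omitted):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class TableSetup {
    public static Table openTable() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Instead of "new HTable(conf, ...)" (which forces a managed connection),
        // create an unmanaged connection and ask it for the table.
        Connection connection = ConnectionFactory.createConnection(conf);
        return connection.getTable(TableName.valueOf("events_sessions"));
    }
}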

Reduce code is randomly getting stuck when inserting data into Postgres

We have a MapReduce job written in Java that reads many small files (say 10k+) and converts them into a single Avro file in the driver; the reducer then inserts a batch of reduced records into a Postgres database. This process runs every hour, but multiple MapReduce jobs run simultaneously, processing different Avro files and opening a separate database connection per job. Sometimes (very randomly) all the tasks get stuck in the reducer phase with the following thread dump -
"C2 CompilerThread0" daemon prio=10 tid=0x00007f78701ae000 nid=0x6db5 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Signal Dispatcher" daemon prio=10 tid=0x00007f78701ab800 nid=0x6db4 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Surrogate Locker Thread (Concurrent GC)" daemon prio=10 tid=0x00007f78701a1800 nid=0x6db3 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Finalizer" daemon prio=10 tid=0x00007f787018a800 nid=0x6db2 in Object.wait() [0x00007f7847941000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000006e5d34418> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x00000006e5d34418> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"Reference Handler" daemon prio=10 tid=0x00007f7870181000 nid=0x6db1 in Object.wait() [0x00007f7847a42000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000006e5d32b50> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- locked <0x00000006e5d32b50> (a java.lang.ref.Reference$Lock)
"main" prio=10 tid=0x00007f7870013800 nid=0x6da1 runnable [0x00007f7877a7b000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:143)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:112)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:71)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:269)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1700)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
- locked <0x00000006e5d34520> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:302)
at ComputeReducer.setup(ComputeReducer.java:299)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:162)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:610)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:444)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
"VM Thread" prio=10 tid=0x00007f787017e800 nid=0x6db0 runnable
"Gang worker#0 (Parallel GC Threads)" prio=10 tid=0x00007f7870024800 nid=0x6da2 runnable
"Gang worker#1 (Parallel GC Threads)" prio=10 tid=0x00007f7870026800 nid=0x6da3 runnable
After this occurs we have to restart the database, else all the reduce jobs sit idle, stuck at around 70%, and even the next hour's jobs cannot run. Initially it used to exhaust the number of open connections, but after increasing the connection limit to a considerably high number that is no longer the case. I should point out that I am no database expert, so please suggest any configuration changes that might help. Just to confirm, does this seem to be a database configuration issue? If yes, would configuring connection pooling in front of Postgres help resolve this?
Any help/ suggestions are highly appreciated! Thanks in advance.
My initial thought would be that if it is random, it is probably a lock. There are two areas to look for locks:
locks between threads on shared resources, and locks on database objects.
I don't see anything in your stack trace to suggest a database lock issue, but it could be caused by not closing transactions: you don't get a deadlock, but your inserts end up waiting.
More likely you have a deadlock in your Java code; perhaps the two waiting threads are waiting on each other?
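If the open-transaction theory is worth checking, the usual shape of the fix in the reducer is to scope statements and result sets with try-with-resources and commit or roll back promptly (a generic sketch, not the actual ComputeReducer code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Generic sketch of tidy transaction handling in a reducer's setup/cleanup,
// not the actual ComputeReducer code.
class ReducerDbHelper {
    private final Connection conn;

    ReducerDbHelper(Connection conn) throws SQLException {
        this.conn = conn;
        conn.setAutoCommit(false);
    }

    void loadReferenceData(String sql) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // cache whatever setup() needs
            }
            conn.commit();                 // end the transaction promptly
        } catch (SQLException e) {
            conn.rollback();               // never leave a transaction open on failure
            throw e;
        }
    }

    void close() throws SQLException {
        conn.close();                      // release the connection when the task ends
    }
}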
I want to add my findings.
After refactoring the code it worked fine for a couple of months, then the problem recurred. We thought it was a Hadoop cluster problem, so a small fresh Hadoop cluster was created, but that didn't solve it either. Finally we looked at our largest database table: it had more than 1.5 billion rows and the SELECT query was taking a long time, so after getting rid of old data from that table, a full VACUUM and REINDEX helped.

What could cause 'Read timed out' in Spring Redis after only a couple of reads?

I have a simple RedisStringTemplate which is throwing SocketTimeoutExceptions after only a couple of reads of one key. I haven't set any timeout in any config, so it is using the defaults. This is happening in a JUnit run under SpringJUnit4ClassRunner, if that matters.
If I run just one test case, which does several reads and a couple of updates, it works fine.
But if I run the whole test class, which has a couple of test cases that read the value, plus some setup/cleanup code that reads and updates it, I get these 'Read timed out' errors.
To do the read, we simply do
myRedisStringTemplate.opsForValue().get(key);
To update this key, we do
myRedisStringTemplate.opsForValue().set(key, valueForKey);
In the scenario with the errors, I believe there is only one update on this key, to set the initial value, then a couple of reads on that key before the 'Read timed out' starts, in the @Before method of the next test where we're trying to clean up data from the previous test.
This is within a Spring application.
Here's the stack trace:
java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
||||||c.i.q.c.MeteredAssignmentsCacheController:73
16:25:32.090|E| java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
com.mycompany.rest.exception.RestException: java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at com.mycompany.myclass...
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:710)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
at com.mycompany.spring.metrics.MetricsAspect.advice_Timed(MetricsAspect.java:85)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:168)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at com.ryantenney.metrics.spring.TimedMethodInterceptor.invoke(TimedMethodInterceptor.java:46)
at com.ryantenney.metrics.spring.TimedMethodInterceptor.invoke(TimedMethodInterceptor.java:32)
at com.ryantenney.metrics.spring.AbstractMetricMethodInterceptor.invoke(AbstractMetricMethodInterceptor.java:58)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:643)
at com.mycompany.myclass...
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:214)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:885)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:653)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.springframework.data.redis.RedisConnectionFailureException: java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:46)
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:36)
at org.springframework.data.redis.connection.jedis.JedisConverters.toDataAccessException(JedisConverters.java:116)
at org.springframework.data.redis.connection.jedis.JedisConnection.convertJedisAccessException(JedisConnection.java:168)
at org.springframework.data.redis.connection.jedis.JedisConnection.get(JedisConnection.java:962)
at org.springframework.data.redis.connection.DefaultStringRedisConnection.get(DefaultStringRedisConnection.java:264)
at org.springframework.data.redis.core.DefaultValueOperations$1.inRedis(DefaultValueOperations.java:45)
at org.springframework.data.redis.core.AbstractOperations$ValueDeserializingRedisCallback.doInRedis(AbstractOperations.java:50)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:181)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:149)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:84)
at org.springframework.data.redis.core.DefaultValueOperations.get(DefaultValueOperations.java:42)
From Redis's point of view, this is a highly unlikely occurrence, so you most likely have a mistake in your Spring or Redis configuration.
For example, I can assume that you have specified a timeout in the Redis configuration:
http://redis.io/topics/clients
By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever. However if you don't like this behavior, you can configure a timeout, so that if the client is idle for more than the specified number of seconds, the client connection will be closed.
So if you're using a connection pool for Redis in Spring, some of your connections could time out at some point but still remain in the pool.
Can you check what timeouts you currently have in your Redis configuration?
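To make that concrete, one way to set an explicit client timeout and have the pool validate connections before handing them out (a sketch assuming Spring Data Redis 1.x with Jedis; host, port and timeout values are illustrative) is:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import redis.clients.jedis.JedisPoolConfig;

@Configuration
public class RedisTestConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        // Validate a pooled connection before use so one the server has already
        // closed is discarded instead of surfacing as 'Read timed out'.
        poolConfig.setTestOnBorrow(true);

        JedisConnectionFactory factory = new JedisConnectionFactory(poolConfig);
        factory.setHostName("localhost");   // illustrative host/port
        factory.setPort(6379);
        factory.setTimeout(5000);           // explicit Jedis socket timeout in ms
        return factory;
    }

    @Bean
    public StringRedisTemplate myRedisStringTemplate(JedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}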

Weblogic Stuck Thread on JDBC call

We frequently get a series of stuck threads on our WebLogic servers. I've analyzed this over a period of time.
What I'd like to understand is whether this stuck-thread stack indicates the thread is still reading data from the open socket to the database, since the queries are simple SELECTs.
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Packet.java:239)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
We've run netstat and other commands; the sockets from the WebLogic app server to the database match the number of connections in the pool.
Any ideas what else we should be investigating here?
Stack trace of thread dump:
"[STUCK] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=10 tid=0x61a5b000 nid=0x25f runnable [0x6147b000..0x6147eeb0]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Packet.java:239)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:172)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:117)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:92)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:77)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1023)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:999)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:584)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
at oracle.jdbc.driver.T4CStatement.fetch(T4CStatement.java:1000)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
at weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl.next(Unknown Source)
The bit starting from weblogic.work.ExecuteThread.run to here has been omitted. We have 8 sets of thread dumps, and each shows the thread waiting on the same line, with the same object locked:
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
At the time the stack was printed, the thread seems to be blocked waiting for more data from the server:
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
Maybe it is just the query which is taking more than StuckThreadMaxTime and WebLogic issues a warning.
If possible I would try:
Find which query or queries are getting the threads stuck and check execution time
Use Wireshark to analyze communication with database
Have a look at the driver source code (JD comes to mind) to understand the stack trace
If you use the WebLogic debug flag -Dweblogic.debug.DebugJDBCSQL you will be able to trace the SQL that is actually being executed
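One related knob, if a long-running SELECT turns out to be the culprit: a JDBC query timeout bounds how long a thread can sit in socketRead0 waiting for results (a generic sketch, not from the original answer; the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryTimeoutExample {
    public static void main(String[] args) throws SQLException {
        // URL, user, and password are placeholders for illustration only.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Ask the driver to cancel the statement if it runs longer than this,
            // so a slow query fails fast instead of tripping StuckThreadMaxTime.
            stmt.setQueryTimeout(30); // seconds
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}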
