Spring Integration is hanging WebSphere Liberty thread

I have a Spring + Spring Integration + Hibernate WebApp deployed in a WebSphere Liberty Application Server.
Sometimes, when I try to stop the application, the server goes down.
I see this in the Log:
[12/16/15 9:27:27:146 CET] 00000096 webapp I com.ibm.ws.webcontainer.webapp.WebApp log SRVE0292I: Servlet Message - [CATAPP#web-0.0.1-SNAPSHOT.war]:.Destroying Spring FrameworkServlet 'IntegrationContext'
[12/16/15 9:39:36:112 CET] 000000f1 ThreadMonitor W WSVR0605W: Thread "Default : 2" (00000096) has been active for 729034 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.
And there is no more info. I need to restart the WebSphere node to start the application again.
I know it's difficult, but does anybody know what the problem might be? Thanks.

When you hit this situation, take a thread dump (jstack, jvisualvm, etc.) and look at what the thread is doing. For example, it might be reading from a socket with a long (or no) timeout. If you can't figure it out, post the thread dump somewhere.
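If attaching jstack is inconvenient, the same information can be logged from inside the webapp; this is a minimal sketch using only the JDK (the class name is hypothetical):

```java
import java.util.Map;

public class ThreadDumper {
    // Builds a jstack-like dump of every live thread's name, state, and stack trace.
    public static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append('"').append(t.getName()).append("\" ")
              .append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dumpAllThreads());
    }
}
```

Triggering this (for example from a servlet or a shutdown listener) when the hang occurs shows which monitor or socket the "Default : 2" thread is stuck on.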

Related

Spring Boot application won't quit on 'eb deploy' since I added embedded ActiveMQ

I have a Spring Boot application hosted on AWS Elastic Beanstalk. Ever since I included embedded ActiveMQ, the application won't quit on redeploy: I get an error that port 5000 is already in use when it tries to start the newly deployed jar.
The only workaround I found is to recreate the environment after each re-deploy, which means a long downtime.
I am suspecting a timing problem with the shutdown hook.
When I Ctrl-C the application locally, it quits after a delay of several seconds, with some exceptions:
javax.jms.JMSException: peer (vm://embedded#1) stopped.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54) ~[activemq-client-5.15.10.jar:5.15.10]
...
Caused by: org.apache.activemq.transport.TransportDisposedIOException: peer (vm://embedded#1) stopped.
My brokerUrl is set to vm://embedded?broker.persistent=false,useShutdownHook=false, although JConsole shows Broker/Embedded/Attributes/Persistent is true.
Any hints?
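One detail that might explain the JConsole observation above: ActiveMQ separates multiple URI options with `&` rather than commas, and broker options on the VM transport take a `broker.` prefix. A hypothetical corrected setting (assuming Spring Boot's `spring.activemq.broker-url` property is being used) would be:

```properties
# Hypothetical corrected URL: options joined with '&', broker options prefixed with 'broker.'
spring.activemq.broker-url=vm://embedded?broker.persistent=false&broker.useShutdownHook=false
```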

Jersey Rx client jersey-client-async-executor shutdown

The web application [ROOT] appears to have started a thread named [jersey-client-async-executor-0] but has failed to stop it.
How to gracefully shutdown the jersey-client-async-executor?
This is used with Spring Boot, JerseyRxClient with embedded tomcat.
Jersey gracefully terminates the thread once the request is complete.
I am using the Spring Boot scheduler to make asynchronous requests using the Rx Jersey client.
I had the same doubt, as every time the scheduler runs, the Jersey client creates new threads with incrementing numbers.
To be sure that the thread is terminated, here is a simple test. In your subscriber, run:
Set<Thread> threadSet = Thread.getAllStackTraces().keySet();
Once the request is complete, the resulting set will no longer contain the jersey-client-async-executor-* thread that was used to make the request.
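The check described above can be made explicit; this is a small sketch (pure JDK, with the thread-name prefix taken from the question and the class name hypothetical):

```java
import java.util.Set;

public class ExecutorThreadCheck {
    // Returns true if any live thread's name starts with the given prefix.
    public static boolean hasThreadWithPrefix(String prefix) {
        Set<Thread> threads = Thread.getAllStackTraces().keySet();
        return threads.stream().anyMatch(t -> t.getName().startsWith(prefix));
    }

    public static void main(String[] args) {
        // After the async request completes, this is expected to print false.
        System.out.println(hasThreadWithPrefix("jersey-client-async-executor"));
    }
}
```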

Webapp hangs when Active MQ broker is running

I have a strange problem with my Spring webapp (running on a local Jetty) which connects to a locally running ActiveMQ broker for JMS functionality.
As soon as I start the broker, the application becomes incredibly slow; e.g. the startup of the ApplicationContext with an active broker takes forever (>10 min; I have not yet waited long enough for it to complete). If I start the broker after the webapp (i.e. after the ApplicationContext was loaded), it runs, but very slowly (requests which usually take <1 s take >30 s). All operations take longer, even the ones without JMS involved. When I run the application without an ActiveMQ broker, everything runs smoothly (except the JMS-related stuff, of course ;-) )
Here's what I tried so far:
Updated the ActiveMQ version to 5.10.1
Used a standalone ActiveMQ instead of the Maven plugin
Moved the broker from a separate JVM (via the ActiveMQ Maven plugin, connected via JNDI lookup in the Jetty config) into the same JVM (started via Spring config, without JNDI)
Changed the ActiveMQ transport from tcp to vm
Tried several ActiveMQ settings (alwaysSyncSend, alwaysSessionAsync, producerWindowSize)
Used CachingConnectionFactory and PooledConnectionFactory
When analyzing a thread dump (jstack) I see many ActiveMQ threads sleeping on a monitor, which look like this:
"ActiveMQ VMTransport: vm://localhost#0-3" daemon prio=6 tid=0x000000000b1a3000 nid=0x1840 waiting on condition [0x00000000177df000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f786d670> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java.lang.Thread.run(Thread.java:662)
Any help is greatly appreciated!
I found the cause of the issue and was able to fix it:
We were passing a transaction manager to the AbstractMessageListenerContainer. While production uses an XA transaction manager, the local Jetty environment only uses a JpaTransactionManager. Apparently JMS waits forever for an XA transaction to be committed, which never happens in the local environment.
By overriding the bean definition of the AbstractMessageListenerContainer for the local environment, setting no transaction manager and using sessionTransacted="true" instead, everything works fine.
I got the idea that it might be related to transaction handling from enabling ActiveMQ logging. With this I saw that something was wrong with the transaction (transactionContext.getTransactionId() returned null).
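The local override described above might look roughly like this in Spring XML (bean names, connection factory, and queue name are hypothetical, and DefaultMessageListenerContainer stands in for whichever AbstractMessageListenerContainer subclass is in use):

```xml
<!-- Local-environment override: no transactionManager property; local JMS transactions instead -->
<bean id="messageListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="example.queue"/>
    <property name="messageListener" ref="exampleListener"/>
    <property name="sessionTransacted" value="true"/>
</bean>
```

With sessionTransacted="true" the container commits a local JMS transaction per message, so nothing waits on an XA coordinator that the local environment does not have.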

HTTP Connection pool

I need to set up an HTTP connection pool in a Spring app on a Tomcat server.
We are debating whether to define the pool at the application or at the server level (applicationContext.xml vs server.xml).
My problem is: I've looked and I've looked, but I just can't find any info on doing either.
For now, I'm working with org.apache.http.impl.conn.PoolingClientConnectionManager inside my class, and it works OK.
How would I be able to define a pool outside my Java code and work with it from there?
Here is the configuration reference you are looking for, for Tomcat 7:
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation
Here is also another SO post on the same subject: How to increase number of threads in tomcat thread pool?
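From the linked reference, a server-level pool is configured on the connector in server.xml; a sketch (attribute values are examples only):

```xml
<!-- server.xml: HTTP connector with an explicit thread pool (example values) -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           minSpareThreads="10"
           connectionTimeout="20000"/>
```

Note that this governs Tomcat's request-processing threads; an outbound client connection pool such as PoolingClientConnectionManager is a separate concern and stays on the application side.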

WebSphere JNDI lookup in non managed threads

I have a Java EE application (actually, it is an apache camel application) deployed on WebSphere Application Server 7.
My application consumes service requests from web services (threads started by the servlet container in WAS) and from JMS queues (not SIBus, but WebSphere MQ, if that matters). For the JMS listener, Camel (or perhaps the underlying Spring framework) starts its own threads (plain Java threads, more or less) to deal with JMS requests.
I also have a transactional database attached to the application. So, in Spring, I have something like this defined to grab a transaction manager (probably WebSphere's built-in JTA):
<tx:annotation-driven/>
My problem is that I get an error like this when Camel/JMS triggers an event in the application:
org.apache.openjpa.persistence.PersistenceException: TransactionManager not found in JNDI under name java:comp/websphere/ExtendedJTATransaction
It seems that threads not initiated by the container itself cannot do JNDI lookups correctly. Is there a way around this issue?
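One direction often suggested for this class of problem on WebSphere (an assumption here, not confirmed by this thread) is to run the consumer work on container-managed threads instead of plain Java threads, e.g. via Spring's CommonJ task executor backed by WAS's default WorkManager; whether it resolves the OpenJPA lookup in this particular setup is not established:

```xml
<!-- Sketch: hand listener work to container-managed threads via the CommonJ WorkManager -->
<bean id="taskExecutor"
      class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
    <property name="workManagerName" value="wm/default"/>
</bean>
```

Threads obtained this way are managed by WAS, so JNDI lookups such as the ExtendedJTATransaction one have the container context they need.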
