How to close idle connections in Spring JMS CachingConnectionFactory?

I am using the Spring JMS CachingConnectionFactory to improve the performance of my application, which is based on Spring Integration and IBM MQ. I set sessionCacheSize to 10 because we have at most 10 concurrent threads (a ThreadPoolTaskExecutor) consuming and sending messages.
When I look at the number of open connections in MQ Explorer (the open output count for the queue), it shows 10, and they stay open for days without ever being closed.
Is there a way to programmatically detect connections that are potentially stale, say idle for half a day? I checked resetConnection(), but I am not sure how to get the last-used time for a session.
Does Spring provide any connection timeout parameter for the CachingConnectionFactory? Or how can I release these idle connections?
Also, a heartbeat/keepalive mechanism will not work for us, as we want to physically close the cached connections based on their last-used time.

If the timeout is a property of the Session object returned by IBM, you could subclass the connection factory and override createSession(): call super.createSession(...), then set the property before returning the session.
You might also have to override getSession(...) and keep calling it until you get a session that is not closed; I don't see any logic to check the session state in the standard factory. (getSession() calls createSession() when the cache is empty.)
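A minimal sketch of that idea, assuming the vendor Session really does expose some idle-timeout setter (the cast and setter below are hypothetical placeholders, not a documented IBM API; the overridden signature follows Spring's SingleConnectionFactory):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;

import org.springframework.jms.connection.CachingConnectionFactory;

public class TimeoutSettingConnectionFactory extends CachingConnectionFactory {

    @Override
    protected Session createSession(Connection con, Integer mode) throws JMSException {
        Session session = super.createSession(con, mode);
        // Hypothetical: cast to the vendor's Session class and apply the
        // idle-timeout property before the session goes into the cache, e.g.
        // ((VendorSession) session).setIdleTimeout(...);
        return session;
    }
}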

Related

MQ CachingConnectionFactory SessionCache not working

I have a limitation: I can listen to only one partition of the Kafka topic, yet I still need to improve message-processing throughput, and at the end of that processing the message is sent to MQ.
So once I receive a message in the Kafka listener, I use @Async for further processing (that is, storing the message in the DB and later posting it to MQ).
The issue I see is that the session cache is not working: once I read from the Kafka listener, it opens a new thread to do the further work, and when that reaches the JMS send method, after a certain point I end up with the MQ error "max connections reach channel capacity", I think MQRC 2537.
I am not sure what the issue is; I am using com.ibm.mq:mq-jms-spring-boot-starter as a dependency.
I have set the session cache to 20 and the async pool to 30. Does this mean it will still try to create 10 more JMS connections if all 30 thread tasks arrive at almost the same time?
My understanding of the session cache is that at most that many sessions will be created, and out of the 30 threads, 10 will need to wait for a JMS session to become available.
Please assist; we are using Spring Boot.
I think the problem here is that you end up with too many connections being opened. All the pooling is done in the background for you by spring-jms, and it appears that your pool size exceeds the number of connections that the channel is configured for.
You will need to apply a throttle by reducing the connection pool size. There are pooling properties that you can set for mq-jms-spring-boot-starter. See https://github.com/ibm-messaging/mq-jms-spring
e.g.
ibm.mq.pool.enabled=true
ibm.mq.pool.maxConnections=5
ibm.mq.pool.blockIfFull=true
ibm.mq.pool.blockIfFullTimeout=60
You will need to determine sensible values for your environment.
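If you also want to cap concurrency on the sending side, one complementary throttle (a sketch, not part of the starter; the bean name and sizes are assumptions) is to bound the @Async executor so that no more tasks run at once than the pool can serve:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Hypothetical executor bean: keeps concurrent @Async tasks at or below
    // the number of connections the channel/pool can actually serve.
    @Bean(name = "mqSendExecutor")
    public ThreadPoolTaskExecutor mqSendExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);      // align with ibm.mq.pool.maxConnections
        executor.setMaxPoolSize(5);
        executor.setQueueCapacity(100);   // excess tasks queue here instead of opening connections
        executor.initialize();
        return executor;
    }
}

Annotating the processing method with @Async("mqSendExecutor") would then route that work through this bounded pool.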

Handling long (> 5 min) requests from an application deployed in WebLogic 12c

We have a problem at hand with a long service request that retrieves a huge amount of data and takes around 5 minutes to complete. We are using EJB and native JDBC to make the requests. Is there a way to extend the transaction timeout for this one particular request (that is, overriding the timeout configured in the domain's JTA), or do we have to increase the domain's JTA transaction timeout to 5 minutes? The latter seems unfavorable, since it might provoke database deadlocks. Are there any other solutions you might suggest that are more robust and safe? Could we perhaps set the transaction timeout at a level other than the domain level? Looking forward to your reply. Thanks.
The JTA timeout can be set at the EJB level. Read this documentation for details.
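For example (a sketch; the bean name is a placeholder and the exact descriptor schema can vary by WebLogic release), a per-bean timeout can be declared in weblogic-ejb-jar.xml:

<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <!-- "ReportDataBean" is a placeholder for the EJB handling the long request -->
    <ejb-name>ReportDataBean</ejb-name>
    <transaction-descriptor>
      <!-- allow this bean's container-managed transactions to run up to 6 minutes -->
      <trans-timeout-seconds>360</trans-timeout-seconds>
    </transaction-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>

Only this bean then gets the longer timeout, leaving the domain-wide JTA setting untouched.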

IBM Websphere MQ Session Lifetime

What are the best practices regarding sessions in an application that is designed to fetch messages from a MQ server every 5 seconds?
Should I keep one session open for the whole time (could be weeks or longer), or better open a session, fetch the messages, and then close the session again?
I am using the IBM XMS v8 .NET client library.
Adding to Attila Repasi's response, I would go for a consumer with a message listener attached. The message listener gets called whenever a message needs to be delivered to the application. This avoids the application explicitly calling receive() to poll the queue and wasting CPU cycles when there are no messages on the queue.
Check the XMS.NET best practices
Keep the connection and session open for a longer period if your application sends or receives messages continuously. Creating a connection or session is a time-consuming operation that uses a lot of resources and involves network flows (for client connections).
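The listener pattern looks roughly like this (a sketch in Java JMS terms for illustration; XMS .NET exposes an analogous listener on its consumer, and the factory/queue objects here are assumed to be configured elsewhere):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ListenerBootstrap {

    public void start(ConnectionFactory connectionFactory, Queue queue) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);

        // The listener is invoked only when a message arrives, so there is no
        // polling loop burning CPU while the queue is empty.
        consumer.setMessageListener((Message message) -> {
            // process the message here
        });

        connection.start(); // keep the connection and session open while the app runs
    }
}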
I'm not sure what you are calling a session, but typically applications connect to the queue manager serving them once at start, and keep that connection up while running.
I don't see a reason to disconnect just to reconnect 5 seconds later.
As for keeping the queues open, it depends on your environment.
If there are no special circumstances, I would keep the queue open.
I think the thing most worth thinking about is how you issue the GETs to read the messages.

Release connection into c3p0 connection pool

I am facing a problem. I am using c3p0 for connection pooling in my project, which also uses Spring, Hibernate, and JSF. My problem is that my web page has a link named "logout"; I want the connection to be released back to the pool when the user clicks logout. How is this possible?
Thanks in advance
Prashant
What you are expecting is to control the number of concurrent users logged into your system: when the 3rd user tries to log in, they should wait for a connection to become free.
Now, you can implement this using a concurrent counter:
Create a filter which filters all requests.
Whenever a new request comes in, increment the counter.
When a user logs out, decrement the counter.
When the counter hits the max value, make that thread wait until a slot becomes available.
You can control the max number of users via JMX or a separate admin console.
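A minimal sketch of that filter, using a java.util.concurrent.Semaphore in place of a hand-rolled counter (class and constant names are made up for illustration; the limit would come from your configuration):

import java.io.IOException;
import java.util.concurrent.Semaphore;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ConcurrentUserLimitFilter implements Filter {

    // Assumed limit: only this many requests are processed at once,
    // which indirectly caps how many pooled connections are in use.
    private static final int MAX_CONCURRENT_USERS = 2;
    private final Semaphore slots = new Semaphore(MAX_CONCURRENT_USERS, true);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            slots.acquire();                   // wait here until a slot frees up
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ServletException("Interrupted while waiting for a free slot", e);
        }
        try {
            chain.doFilter(request, response); // handle the request
        } finally {
            slots.release();                   // free the slot when the request completes
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}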
Also, a connection should be released when the thread handling it terminates (since the session object no longer has any references, it can be GC'd, and after a timeout the connection will be reused in the pool).
It's always better not to create a bottleneck around a DB resource.

How does Spring-JPA EntityManager handle "broken" connections?

I have an application that uses a Spring-managed EntityManager (JPA), and I wonder what happens if the database becomes unavailable during the lifetime of the application.
I expect that in this situation it will throw an exception the first time I try to do anything against the database, right?
But say I wait 10 minutes and try again, and the DB happens to be back. Will it recover? Can I arrange it so that it does?
Thanks
Actually, neither Spring nor JPA has anything to do with it. Internally, all persistence frameworks simply call DataSource.getConnection() and expect to receive a (probably pooled) JDBC connection. Once they're done, they close() the connection, effectively returning it to the pool.
Now, when the DataSource is asked for a connection but the database is unavailable, it will throw an exception. That exception propagates up and is handled in some way by whatever framework you use.
Now, to answer your question: typically a DataSource implementation (like DBCP, c3p0, etc.) will discard a connection known to be broken and replace it with a fresh one. It really depends on the provider, but you can safely assume that once the database is available again, the DataSource will gradually get rid of sick connections and replace them with healthy ones.
Also, many DataSource implementations provide ways of testing a connection periodically and before it is returned to the client. This is important in pooled environments: the DataSource holds a pool of connections, and when the database becomes unavailable it has no way to discover that on its own. So some DataSources test a connection (by calling SELECT 1 or similar) before giving it to the client, and do the same once in a while to get rid of broken connections, e.g. ones with a broken underlying TCP connection.
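For example, with c3p0 (the pool mentioned elsewhere in this thread), that kind of validation can be switched on roughly like this (a sketch; the URL and timing values are placeholders to adapt to your environment):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ValidatedDataSourceConfig {

    public static ComboPooledDataSource dataSource() {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        ds.setPreferredTestQuery("SELECT 1");    // cheap, DB-specific validation query
        ds.setTestConnectionOnCheckout(true);    // validate before handing a connection out
        ds.setIdleConnectionTestPeriod(300);     // also test idle connections every 300 seconds
        return ds;
    }
}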
TL;DR
Yes, you will get an exception, and yes, the system will work normally once the database is back. By the way, you can easily test this!
