Handling long (> 5mins) requests from application deployed in WebLogic 12c - oracle

We have a problem with a long-running service request that retrieves a huge amount of data and takes around 5 minutes to complete. We use EJB and native JDBC to issue the requests. Is there a way to extend the transaction timeout for this one particular request (that is, override the timeout configured in the domain's JTA), or do we have to increase the domain's JTA transaction timeout to 5 minutes? The latter seems unfavorable, since it might provoke database deadlocks. Are there any other solutions you can suggest that are more robust and safe? Could we perhaps set the transaction timeout at a level other than the domain level? Looking forward to your reply. Thanks.

The JTA timeout can be set at the EJB level. Read this documentation for details.
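As a sketch of what the EJB-level setting looks like: in WebLogic, a per-bean timeout goes in the transaction-descriptor element of weblogic-ejb-jar.xml (the bean name below is hypothetical; verify the element names against your 12c deployment-descriptor schema):

```xml
<!-- weblogic-ejb-jar.xml: overrides the domain JTA timeout for one bean.
     ReportExportBean is a made-up name; use your long-running EJB's name. -->
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>ReportExportBean</ejb-name>
    <transaction-descriptor>
      <!-- 600 seconds, comfortably above the ~5-minute request -->
      <trans-timeout-seconds>600</trans-timeout-seconds>
    </transaction-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

Only transactions started for this bean get the longer timeout; the rest of the domain keeps its default.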

Related

Hikari JDBC Pool Monitoring in SpringBoot app with DataDog

We recently had a production outage where a 3rd party server was non-responsive. This occurred on a Spring Boot EC2 microservice pointing to a MySQL DB.
While waiting for these REST calls to complete, our service started throwing SpringBootJPAHikariCP - Connection is not available, request timed out after 30000ms. errors.
The thing is, we are pretty careful in avoiding keeping a DB transaction open when making a potentially high-latency REST call. Perhaps not careful enough?
We have DataDog monitoring enabled on this microservice and I am not seeing data that would help us diagnose the problem. Specifically, I would like some visibility into our MySQL DB transactions to see where they are being left open longer than expected so we can refactor accordingly.
If I look at DD APM for the service, I can see the HTTP request latency stats, and JVM metrics which show the thread count creeping up. But no specific stats about transactions and connections from the pool.
If I look at DD APM for the DB, I can see the latency for queries, updates, etc, but it is all ordinary and nothing about transactions themselves.
Is it possible to configure Spring and/or DataDog to visualize transaction time?
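One possible approach, assuming Spring Boot 2.x with Micrometer (property names below come from those versions and may differ in yours): HikariCP's leak detection logs a stack trace for any connection held past a threshold, which pinpoints the code leaving transactions open, and the pool's Micrometer metrics (active, pending, usage timings) can be exported to DataDog:

```properties
# Sketch for application.properties (Spring Boot 2.x + Micrometer assumed).

# Log a warning with a stack trace whenever a connection is held
# longer than 60s -- this identifies the call sites leaving
# transactions/connections open too long.
spring.datasource.hikari.leak-detection-threshold=60000

# Export Micrometer metrics to DataDog; Hikari's pool metrics
# (e.g. hikaricp.connections.active, hikaricp.connections.pending)
# are registered automatically when actuator is on the classpath.
management.metrics.export.datadog.api-key=<your-api-key>
```

Graphing active vs. pending connections alongside the leak-detection warnings usually makes the "who is holding the pool during the slow REST call" question answerable directly in DataDog.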

Spring Data Cassandra how to set finite number of connection retries?

I am currently implementing a Spring Boot microservice which persists data to a single Cassandra database node. I need to be able to set, in the microservice config file, the number of retries if the connection to the database is lost and the number of milliseconds between retries. I am using spring-boot version 1.5.6 and spring-data-cassandra version 1.5.6.
I was able to set the number of milliseconds between retries by creating a cluster of type CassandraCqlClusterFactoryBean and passing a custom reconnection policy to the cluster.setReconnectionPolicy() method. But I am not able to cap the number of retries with a custom retry policy. If I understood correctly, the retry policy only handles the case in which a query is made, but I need to limit the number of retries at all times, whether or not a query is made.
After a couple of days of research I was able to produce an ugly hack which basically uses a custom ReconnectionSchedule and stops the Spring Boot application after certain conditions are met in the nextDelayMs() method. Nevertheless, I continued to look through the source code in debug mode and saw that a NoHostAvailableException is thrown by the ControlConnection. So I checked the official DataStax documentation regarding the control connection, and I found
Coming soon…
So could someone please show me how to correctly implement a way of stopping my Cassandra driver from trying to reconnect to the node after a predefined number of retries?
Thanks in advance.
Look here at 9.3.1.
Maybe you can do something like trying to open a session every x seconds until a timeout expires or a session is successfully created.
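The suggestion above could be sketched as a plain bounded-retry loop, independent of the driver (BoundedRetry is a made-up helper; in practice the Callable would wrap the session-opening call such as cluster.connect()):

```java
import java.util.concurrent.Callable;

// Hypothetical helper (not part of the Cassandra driver): runs an action
// up to maxAttempts times, sleeping delayMs between failures, then
// rethrows the last exception so the caller can shut the app down cleanly.
public class BoundedRetry {
    public static <T> T retry(Callable<T> action, int maxAttempts, long delayMs)
            throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();       // e.g. cluster.connect()
            } catch (Exception e) {
                last = e;                   // remember and retry
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMs);  // configured pause between retries
                }
            }
        }
        throw last;                         // gave up after maxAttempts
    }
}
```

A session-opening call wrapped this way fails fast after a predefined number of attempts instead of letting the driver reconnect forever, and the exception that escapes can trigger the application shutdown you are currently forcing from nextDelayMs().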

How to close idle connections in Spring JMS CachingConnectionFactory?

I used the Spring JMS CachingConnectionFactory to improve the performance of my application, which is based on Spring Integration and IBM MQ. I set sessionCacheSize to 10, since we have at most 10 concurrent threads (ThreadPoolTaskExecutor) consuming/sending messages.
When I looked at the number of connections opened in MQ Explorer (open output count for the queue), it shows 10, and it stays that way for days; the connections never get closed.
Is there a way to programmatically detect connections that are potentially stale - say, idle for half a day? I checked resetConnection(), but I am not sure how to get the last-used time for the session.
Does Spring provide any connection timeout parameter for CachingConnectionFactory? How can these idle connections be released?
Also, the heartbeat/keepalive mechanism will not work for us, as we want to physically close the cached connections based on last-used time.
If the timeout is a property of the Session object returned by IBM, you could subclass the connection factory, override createSession(); call super.createSession(...) then set the property before returning it.
You might also have to override getSession(...) and keep calling it until you get a session that is not closed. I don't see any logic to check the session state in the standard factory. (getSession() calls createSession() when the cache is empty.)
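The last-used bookkeeping that the factory doesn't give you could look roughly like this (a generic, made-up sketch, not Spring API; in a CachingConnectionFactory subclass you would touch() on each session use and close the matching sessions, e.g. via resetConnection(), when evictIdle() finds stale entries):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper class: tracks a last-used timestamp per cached
// resource and evicts entries idle longer than maxIdleMs.
public class IdleTracker<K> {
    private final Map<K, Long> lastUsed = new ConcurrentHashMap<>();
    private final long maxIdleMs;

    public IdleTracker(long maxIdleMs) {
        this.maxIdleMs = maxIdleMs;
    }

    // Record that the resource identified by key was just used.
    public void touch(K key) {
        lastUsed.put(key, System.currentTimeMillis());
    }

    // Drop every entry idle longer than maxIdleMs; returns the eviction
    // count. A factory subclass would physically close those sessions here.
    public int evictIdle() {
        long now = System.currentTimeMillis();
        int evicted = 0;
        for (Iterator<Long> it = lastUsed.values().iterator(); it.hasNext(); ) {
            if (now - it.next() > maxIdleMs) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public int size() {
        return lastUsed.size();
    }
}
```

A scheduled task calling evictIdle() every few minutes would give you the "close if idle for half a day" behavior the question asks for.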

How does Spring-JPA EntityManager handle "broken" connections?

I have an application that uses a Spring-EntityManager (JPA) and I wonder what happens if the database happens to be unavailable during the lifetime of my aforesaid application.
I expect that in that situation it will throw an exception the first time I try to do anything on the database, right?
But, say I wait 10 minutes and try again then and the DB happens to be back. Will it recover? Can I arrange it so it does?
Thanks
Actually, neither Spring nor JPA has anything to do with it. Internally, all persistence frameworks simply call DataSource.getConnection() and expect to receive a (probably pooled) JDBC connection. Once they're done, they close() the connection, effectively returning it to the pool.
Now, when the DataSource is asked for a connection but the database is unavailable, it will throw an exception. That exception will propagate up and be handled somehow by whatever framework you use.
Now to answer your question - typically a DataSource implementation (like dbcp, c3p0, etc.) will discard a connection known to be broken and replace it with a fresh one. It really depends on the provider, but you can safely assume that once the database is available again, the DataSource will gradually get rid of sick connections and replace them with healthy ones.
Also, many DataSource implementors provide ways of testing the connection periodically and before it is returned to the client. This is important in pooled environments, where the DataSource holds a pool of connections and has no way to discover on its own that the database has become unavailable. So some DataSources test the connection (by calling SELECT 1 or similar) before giving it back to the client, and do the same once in a while to get rid of broken connections, e.g. due to a broken underlying TCP connection.
TL;DR
Yes, you will get an exception and yes the system will work normally once the database is back. BTW you can easily test this!
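For example, with commons-dbcp the validation described above is switched on via bean properties (a sketch; the URL and timings are placeholders to adapt):

```xml
<!-- Sketch: a dbcp pool that validates connections before handing them
     out and while idle, so broken connections are replaced once the
     database comes back. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
  <property name="url" value="jdbc:mysql://localhost:3306/app"/>
  <!-- cheap query used to probe connection health -->
  <property name="validationQuery" value="SELECT 1"/>
  <!-- test each connection before returning it to the application -->
  <property name="testOnBorrow" value="true"/>
  <!-- periodically test idle connections in the background -->
  <property name="testWhileIdle" value="true"/>
  <property name="timeBetweenEvictionRunsMillis" value="30000"/>
</bean>
```

With testOnBorrow enabled, the application never receives a connection that just failed validation, which is exactly the recovery behavior the question asks about.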

Cache values in Java EE

I'm building a simple message delegation application. Messages are sent on both ends via JMS. I'm using an MDB to process incoming messages, transform them and send them to a target queue. Unfortunately, the same message can be sent to the incoming queue more than once, but it is not allowed to forward duplicates.
So what is the best way to accomplish that?
Since there can be multiple MDBs listening on the incoming queue, I need a single cache where I can store the unique message UUIDs of the incoming messages for at least an hour. How should this cache be accessed? Via a singleton/static class (I'm running Java EE 5 and thus don't have the @Singleton annotation)?
In addition I think all operations must be synchronized, right? Does that harm performance too much?
@Ingo: are you OK with a database solution? You can use a full-fledged DB server or a simple Apache Derby solution for this.
If so, you can have a simple table where you store each message's unique UID and check against it for uniqueness. This solution has the following benefits:
Simple code
No need for a time-bound cache (1 hour); you can check a message's uniqueness forever.
Persistent record of what messages came in.
No need for expensive synchronization; you can rely on the DB isolation level for consistency.
Centralized solution for your possibly many deployments of the application.
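The DB-side uniqueness check can be as simple as a primary-key constraint, letting the insert itself act as the duplicate test (table and column names below are made up):

```sql
-- Sketch of a dedup table: the primary key enforces uniqueness, so a
-- duplicate insert fails atomically with no explicit locking in the MDB.
CREATE TABLE processed_messages (
    message_uuid VARCHAR(36) PRIMARY KEY,
    received_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Forwarding logic: attempt the insert first; forward the message only
-- if the insert succeeds, skip it if the key violation is raised.
INSERT INTO processed_messages (message_uuid) VALUES (?);
```

Each MDB instance races on the same constraint, so exactly one of them wins the insert for a given UUID and forwards the message.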
