I have a very specific scenario where, during the execution of a query, specifically while fetching rows from the database into my result set, I get an OutOfMemoryError.
The code is as simple as this:
public interface MyRepository extends Repository<MyEntity, Long> {

    @EntityGraph(value = "MyBigEntityGraphToFetchAllCollections", type = EntityGraphType.FETCH)
    @QueryHints({@QueryHint(name = "org.hibernate.readOnly", value = "true")})
    MyEntity getOneById(Long id);
}
public class MyService {
    ...
    public MyEntity someMethodCalledInLoop(Long id) {
        try {
            return repository.getOneById(id);
        } catch (OutOfMemoryError error) {
            // Here the connection is closed. How to reset HikariCP?
            System.gc();
            return null;
        }
    }
}
It seems weird that a getOne consumes all the memory, but because it eagerly fetches about 80 collections, the resulting row multiplication makes some cases unmanageable.
I know I have the option to lazily load the collections, but I don't want to. Hitting the database 1+N times on every load costs more time than my application can afford. It's a batch process over millions of records, and less than 0.001% of them have this impact on memory. So my strategy is simply to discard these few records and process the next ones.
Just after catching the OutOfMemoryError the memory is freed, because the troublesome entity becomes garbage. But due to this Error, HikariCP closes (or is forced to close) the connection.
On the next call of the method, HikariCP still gives me a closed connection. It seems that, because of the lack of memory, HikariCP didn't finish the previous transaction correctly and stays stuck in this state forever.
My intention now is to reset or recover HikariCP. I don't need to care about other threads using the pool.
So, after all that, my simple question is: how do I programmatically restart or recover HikariCP to its initial state, without rebooting the application?
Thanks a lot to whoever reads this.
Try adding this to your Hibernate configuration:
<property name="hibernate.hikari.connectionTestQuery">select 1</property>
This way HikariCP will test that the connection is still alive before giving it to Hibernate.
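If the pool is configured through Spring Boot rather than directly in the Hibernate properties, the equivalent setting (assuming the standard spring.datasource.hikari property prefix) would be:

spring.datasource.hikari.connection-test-query=select 1

Note that HikariCP recommends a test query only for legacy drivers that do not support JDBC4's Connection.isValid().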
Nothing has worked so far.
I minimized the problem by adding a 'query hint' to the method:
@QueryHints({@QueryHint(name = "org.hibernate.timeout", value = "10")})
MyEntity getOneById(Long id);
99% of the result sets are fetched in one second or less, but sometimes the result set is so big that it takes longer. This way, JDBC stops fetching the result before memory gets compromised.
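Regarding the original question of recovering the pool without restarting the application, one possibility (a minimal sketch, assuming the Spring-managed DataSource is, or unwraps to, a HikariDataSource; the class and method names are illustrative) is to ask HikariCP to evict and replace its current connections:

import java.sql.SQLException;

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;

public class PoolRecovery {

    private final DataSource dataSource;

    public PoolRecovery(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void evictAllConnections() throws SQLException {
        // Unwrap the Spring-managed DataSource to the HikariCP implementation
        HikariDataSource hikari = dataSource.unwrap(HikariDataSource.class);
        // Mark every pooled connection for eviction: idle connections are closed
        // immediately, in-use ones as soon as they are returned to the pool
        hikari.getHikariPoolMXBean().softEvictConnections();
    }
}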
I currently have a Spring Boot based application with no active cache. Our application is heavily dependent on key-value configurations, which we maintain in an Oracle DB. Currently, without a cache, every time I want to get a value from that table it is a database call. This is, expectedly, causing a lot of overhead due to the high number of calls to the DB. Hence the need for a cache arose.
On searching for caching solutions for Spring Boot, I mostly found approaches where objects are cached while CRUD operations are performed via the application code itself, using annotations like @Cacheable, @CachePut, @CacheEvict, etc., but this is not applicable for me. I have master data of key-value pairs in the DB; any change needs approval, so users are not given direct access, and once approved the change is made directly in the DB.
I want these key-values to be loaded at startup and kept in memory, so I tried to implement that using @PostConstruct and the ConcurrentHashMap class, something like this:
public ConcurrentHashMap<String, String> cacheMap = new ConcurrentHashMap<>();

@PostConstruct
public void initialiseCacheMap() {
    List<MyEntity> list = myRepository.findAll();
    for (int i = 0; i < list.size(); i++) {
        cacheMap.put(list.get(i).getKey(), list.get(i).getValue());
    }
}
In my service class, whenever I want to get something, I first check whether the data is available in the map; if not, I check the DB.
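That lookup reads roughly like the following sketch, living in the same class as cacheMap (findByKey is a hypothetical repository method, shown only to illustrate the database fallback):

public String getConfigValue(String key) {
    // Serve from the in-memory map when possible, fall back to the database otherwise
    String value = cacheMap.get(key);
    if (value == null) {
        MyEntity entity = myRepository.findByKey(key);   // hypothetical fallback query
        if (entity != null) {
            value = entity.getValue();
            cacheMap.put(key, value);
        }
    }
    return value;
}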
My purpose is being fulfilled and I have been able to drastically improve the performance of the application. A certain set of transactions that earlier took 6.28 seconds to complete now finishes in a mere 562 milliseconds! However, there is just one problem which I am not able to figure out:
@PostConstruct is called by Spring only once, on startup, after dependency injection. This means I have no way to re-trigger the cache build without a restart or application downtime, which is unfortunately not acceptable. Further, as of now, I do not have the liberty to use any existing caching frameworks or libraries like Ehcache or Redis.
How can I achieve periodic refreshing of this cache (let's say every 30 minutes?) with only plain old Java/Spring classes/libraries?
Thanks in advance for any ideas!
You can do this in several ways, but one way to achieve it is something in this direction:
private const val everyThirtyMinute = "0 0/30 * * * ?"

@Component
class TheAmazingPreloader {

    @Scheduled(cron = everyThirtyMinute)
    @EventListener(ApplicationReadyEvent::class)
    fun refreshCachedEntries() {
        // the preloading happens here
    }
}
Then you have the preloading bits when the application has started, and also the refreshing mechanism in place that triggers, say, every 30 minutes.
You will also need to add the following annotation to some @Configuration class or the @SpringBootApplication class:
@EnableScheduling
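For a plain Java Spring application, a rough equivalent of the same idea (a sketch reusing the repository and map from the question; the class name is illustrative) could look like this:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CacheRefresher {

    private final MyRepository myRepository;
    private final ConcurrentHashMap<String, String> cacheMap = new ConcurrentHashMap<>();

    public CacheRefresher(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    // Runs once when the application is ready, then every 30 minutes
    @EventListener(ApplicationReadyEvent.class)
    @Scheduled(cron = "0 0/30 * * * ?")
    public void refreshCachedEntries() {
        Map<String, String> fresh = new HashMap<>();
        for (MyEntity entity : myRepository.findAll()) {
            fresh.put(entity.getKey(), entity.getValue());
        }
        // Replace the contents in two steps; a brief window with partial data is acceptable here
        cacheMap.clear();
        cacheMap.putAll(fresh);
    }
}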
I'm working with Spring Boot 2.3.5, Hibernate and a Hikari pool.
I have an entity, let's call it A. This entity has an "execution status counter" long field, incremented by an async method at the end of its executions, so I can split the execution across multiple threads while keeping a progress counter.
The increment is performed like this:
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Lock(LockModeType.PESSIMISTIC_WRITE)
default void incrementStatusCount(String lotto, Long progressivo) {
    A a = findById(...).get();
    a.setStatoLdSN(a.getStatoLdSN() != null ? a.getStatoLdSN() + 1 : 1L);
    saveAndFlush(a);
}
And it works fine: externally, using a DB tool, I can see the counter being updated.
Now, at the end of my whole execution, I change the string status of the whole execution, so I load the entity. Since the entity is already in the cache, I call refresh on the entity manager; I can see the query being performed and also the retrieved values in the Hibernate log... and my counter is 0.
Now, Oracle's default isolation level is READ_COMMITTED (and I verified it on the connection), and my incremented value is committed, because I can see it using the DB client, no?
So why is JPA, even after calling refresh, not loading the right value?
When I read this tutorial about transactions, I noticed the timeout property, which I have never used before in any of the REST services I have developed.
For example, in this code:
@Service
@Transactional(
        isolation = Isolation.READ_COMMITTED,
        propagation = Propagation.SUPPORTS,
        readOnly = false,
        timeout = 30)
public class CarService {

    @Autowired
    private CarRepository carRepository;

    @Transactional(
            rollbackFor = IllegalArgumentException.class,
            noRollbackFor = EntityExistsException.class,
            rollbackForClassName = "IllegalArgumentException",
            noRollbackForClassName = "EntityExistsException")
    public Car save(Car car) {
        return carRepository.save(car);
    }
}
What is the benefit or advantage of using the timeout property? Is it good practice to use it? Can anyone tell me about use cases for the timeout property?
As Spring Docs explain:
Timeout enables client to control how long the transaction runs before timing out and being rolled back automatically by the
underlying transaction infrastructure.
So the benefit is fairly obvious: to control how long the transaction (and the queries under it) may last before being rolled back.
Q: Why is controlling the transaction time useful?
A: If you deliberately expect your transaction not to take too long, this configuration is a good fit; and if you expect that your transaction might take longer than its default maximum time, it is, again, helpful to provide this configuration.
One reason is to stop records from being locked for a long time and therefore unable to serve other requests.
Let's say you are booking a ticket. On the final submission page, it is taking very long; will your user wait forever? So you set an HTTP client timeout. But now that you have the HTTP client timeout, what happens if you don't have a transaction timeout? You display an error to the user saying it didn't succeed, but your transaction takes its time, since it doesn't have any timeout, and commits after your HTTP client has timed out.
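As a sketch of that idea (the service class and values are illustrative), the transaction timeout is kept at or below the HTTP client timeout, so the database work is rolled back instead of committing after the client has already given up:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BookingService {

    // If the booking cannot commit within 30 seconds, roll it back rather than
    // letting it finish long after the HTTP client has timed out
    @Transactional(timeout = 30)
    public void bookTicket(Long ticketId, Long userId) {
        // reserve the seat, charge the payment, etc.
    }
}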
All of the above answers are correct, but note that:
this property is exclusively designed for use with Propagation.REQUIRED or Propagation.REQUIRES_NEW, since it only applies to newly started transactions.
as the documentation describes.
My problem is straightforward. I want to access some data from the database when the application loads on Tomcat. To do something at that point in time, I use @PostConstruct (which does its job properly).
However, in that method I make two separate connections to the DB: one to fetch a list of entities and another to add them to a common library. The second step implies some behind-the-scenes queries to resolve lazy-loaded associations. Here is the code snippet:
@Override
@PostConstruct
public void populateLibrary() {
    // query for the Book Descriptors - 1st query works!!!
    List<BookDescriptor> bookDescriptors = bookDescriptorService.list();

    Session session = sessionFactory.openSession();
    Transaction transaction = null;
    try {
        transaction = session.beginTransaction();
        // resolving some lazy-loading associations - 2nd query fails!!!
        for (BookDescriptor book : bookDescriptors) {
            library.addEntry(book);
        }
        transaction.commit();
    } catch (HibernateException e) {
        if (transaction != null) {
            transaction.rollback();
        }
        e.printStackTrace();
    } finally {
        session.close();
    }
}
The 1st query works while the 2nd fails, as noted in the comments. The failure gives:
org.hibernate.LazyInitializationException: could not initialize proxy - no Session
at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:86)
at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:140)
at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:190)
at com.freightgate.domain.SecurityFiling_$$_javassist_7.getSfSubmissionType(SecurityFiling_$$_javassist_7.java)
at com.freightgate.dao.SecurityFilingTest.test(SecurityFilingTest.java:73)
Which is very odd, since I explicitly opened and closed a transaction. However, if I inspect some details of how the 1st query works, it seems that behind the scenes the session is bound to the AbstractLazyInitializer class.
I resolved my problem by abstracting the functionality from the for loop away into a separate service class annotated with @Transactional(readOnly = true). Still, I'm puzzled as to why the approach I posted here fails.
If anyone has some hints, I'd be very happy to hear them.
You load entities in a first session, then close this session, then open a new session and try to lazy-load collections of those entities. That can't work.
For lazy loading to work, the entity must be attached to an open session. Just opening another session doesn't attach entities you loaded before to this new session. In the meantime, some other transaction could have radically changed the database; the entity might not even exist anymore...
The best solution is what you have done: encapsulate everything into a single transactional service. You could also have opened the transaction before calling the first service, but why handle transactions programmatically when Spring does it for you declaratively?
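A minimal sketch of that approach, following the @Transactional(readOnly = true) variant the question author ended up with (the service class name and the Library wiring are illustrative):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LibraryPopulationService {

    @Autowired
    private BookDescriptorService bookDescriptorService;

    @Autowired
    private Library library;

    // Both the query and the lazy loading triggered by addEntry()
    // run inside one session, so no LazyInitializationException
    @Transactional(readOnly = true)
    public void populateLibrary() {
        List<BookDescriptor> bookDescriptors = bookDescriptorService.list();
        for (BookDescriptor book : bookDescriptors) {
            library.addEntry(book);
        }
    }
}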
We have our JBoss and Oracle on separate servers. The connections seem to be getting dropped, and this is causing issues with JBoss. How can I have JBoss reconnect to Oracle if the connection is bad, while we figure out why the connections are being dropped in the first place?
Whilst you can use the old "select 1 from dual" trick, the downside is that it issues an extra query each and every time you borrow a connection from the pool. For high volumes, this is wasteful.
JBoss provides a special connection validator which should be used for Oracle:
<valid-connection-checker-class-name>
    org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
This makes use of the proprietary ping() method on the Oracle JDBC Connection class, and uses the driver's underlying networking code to determine if the connection is still alive.
However, it's still wasteful to run this each and every time a connection is borrowed, so you may want to use the facility where a background thread checks the connections in the pool, and silently discards the dead ones. This is much more efficient, but means that if the connections do go dead, any attempt to use them before the background thread runs its check will fail.
See the wiki docs for how to configure the background checking (look for background-validation-millis).
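For illustration, a datasource fragment along these lines enables the background check (the element names follow the JBoss 5 style *-ds.xml; older versions use background-validation-minutes instead, so check the wiki docs for your version):

<valid-connection-checker-class-name>
    org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
<background-validation>true</background-validation>
<background-validation-millis>60000</background-validation-millis>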
There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection.
The JBoss Wiki documents the various attributes of the pool.
<check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
Seems like it should do the trick.
Not enough rep for a comment, so it's in the form of an answer. The "select 1 from dual" and skaffman's org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker approaches are equivalent, although the connection checker does provide a level of abstraction. We had to decompile the Oracle JDBC drivers for a troubleshooting exercise, and Oracle's internal implementation of the ping is to perform a "select 'x' from dual". Natch.
JBoss provides two ways to validate a connection:
- ping based, and
- query based
You can use either, as per your requirement. The validation is scheduled by a separate thread, per the duration defined in the datasource configuration file.
<background-validation>true</background-validation>
<background-validation-minutes>1</background-validation-minutes>
Sometimes, if you don't have the right Oracle driver in JBoss, you may get a ClassCastException or a related error, and connections may start dropping out of the connection pool. You can then try creating your own connection validator class by implementing the org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. This interface provides only a single method, isValidConnection(), and expects null in return for a valid connection.
For example:
public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable {

    private static final Logger log = Logger.getLogger(OracleValidConnectionChecker.class);

    // pingDatabase(int) is resolved reflectively from the Oracle connection class
    private final Method ping;

    // The timeout passed to pingDatabase (apparently the timeout is ignored?)
    private static Object[] params = new Object[] { new Integer(5000) };

    public OracleValidConnectionChecker() throws Exception {
        Class<?> oracleConnection = Class.forName("oracle.jdbc.driver.OracleConnection");
        ping = oracleConnection.getMethod("pingDatabase", new Class[] { Integer.TYPE });
    }

    // Returning null marks the connection as valid; a SQLException marks it as broken
    public SQLException isValidConnection(Connection c) {
        try {
            Integer status = (Integer) ping.invoke(c, params);
            if (status.intValue() < 0) {
                return new SQLException("pingDatabase failed status=" + status);
            }
        } catch (Exception e) {
            log.warn("Unexpected error in pingDatabase", e);
        }
        // OK
        return null;
    }
}
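You would then point the datasource at your custom class through the same <valid-connection-checker-class-name> element shown earlier.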
A little update to @skaffman's answer: in JBoss 7 you have to use the "class-name" attribute when setting the valid connection checker, and the package is also different:
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" />
We've recently had some intermittent request-handling failures caused by orphaned Oracle DBMS_LOCK session locks that were retained indefinitely in the client-side connection pool.
So here is a solution that forces session expiry after 30 minutes but doesn't affect the application's operation (the deliberate 1/0 division makes the validation query fail for any session that logged on more than 30 minutes ago, so the pool discards that connection):
<check-valid-connection-sql>select case when 30/60/24 > sysdate-LOGON_TIME then 1 else 1/0 end
from V$SESSION where AUDSID = userenv('SESSIONID')</check-valid-connection-sql>
This may involve some slowdown in obtaining connections from the pool. Make sure to test this under load.