Given that we're using EclipseLink on an Oracle database:
1. we have some data from table A cached in JPA cache
2. we're calling a stored procedure, which modifies data in table A
Will the JPA cache be informed (through a database event) that the data in table A has changed, i.e., will it be invalidated?
No, JPA is unaware of any changes made to the database outside of JPA queries, whether through other persistence contexts or even through the same persistence unit on a different JVM. There are many ways to deal with this, though, such as invalidating and managing the cache yourself:
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Expiration
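For example, a minimal sketch of annotation-based expiration, assuming a hypothetical TableA entity:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Cache;

@Entity
@Cache(expiry = 60000) // EclipseLink-specific: cached instances become invalid 60 seconds after being cached
public class TableA {
    @Id
    private long id;
}

You can also invalidate manually right after calling the stored procedure, using the standard JPA Cache API: entityManagerFactory.getCache().evict(TableA.class).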
Or registering EclipseLink with the database to listen for change events:
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/DatabaseEvents
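On Oracle, this integration is EclipseLink's Database Change Notification (DCN) support. A sketch of enabling it programmatically, using the property documented on the page above (the persistence unit name is hypothetical):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

Map<String, String> props = new HashMap<>();
// Registers EclipseLink's Oracle DCN listener so that database-side changes
// (including those made by stored procedures) invalidate the shared cache
props.put("eclipselink.cache.database-event-listener", "DCN");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit", props);

Note that the database user needs the CHANGE NOTIFICATION privilege for DCN to work.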
You might be better off, though, just making the changes through JPA wherever possible.
Related
I am working on a Java 8 / Spring Boot 2 application, and I have noticed that the security module of my app internally uses the findByEmail method of my UserRepository (which is a standard Spring Data JPA repository). When I enabled Hibernate SQL logging, I discovered that these queries are performed multiple times within the same session (security uses it 3-4 times, and then my business code uses it a few more times). Each time, the query hits the database.
This surprised me, as I expected it to be cached in the Hibernate's first level cache. After reading up about it a little bit more, I found out that the first level cache only caches the result of the findById query, not others.
Is there any way I can cache the result of the findByEmail query in the first level cache? (I don't want the cache to be shared between sessions, and I don't want to use the 2nd level cache, as I think the cache should be invalidated right after the current session ends.)
Yes, you can cache the results of a query on a unique property if you annotate the property with the @NaturalId annotation. If you then use the dedicated API to execute the query, the results will be stored in the 1st level cache. An example:
User user = entityManager
        .unwrap(Session.class)
        .bySimpleNaturalId(User.class)
        .load("john@example.com");
In Hibernate Envers it is possible to have a separate audit table. Similarly, is it possible to log into tables other than the entity’s table using Spring Data JPA auditing?
The auditing feature of Spring Data JPA just fills attributes in the entity you are persisting. How and where these attributes get persisted is controlled by your JPA implementation and, of course, your database.
JPA offers @SecondaryTable to map fields to a second table.
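A minimal sketch of such a mapping (table and column names are hypothetical):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.SecondaryTable;

@Entity
@SecondaryTable(name = "ACCOUNT_AUDIT",
        pkJoinColumns = @PrimaryKeyJoinColumn(name = "ACCOUNT_ID"))
public class Account {
    @Id
    private Long id;

    private String name; // stored in the primary ACCOUNT table

    @Column(table = "ACCOUNT_AUDIT") // stored in the secondary table
    private String lastModifiedBy;
}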
If this isn't flexible enough for you, you can always employ database tools to achieve the effect by mapping the entity to a view which via triggers distributes the data however you want.
Currently I am using WebLogic with Oracle.
I have one instance of Oracle DB and two legacy schemas, so I use two datasources.
To keep transactionality I use XA, but from time to time HeuristicExceptions are thrown, causing some inconsistency at the data level.
Since it is the same instance, is it somehow possible not to use XA and instead define a single datasource that has access to both schemas?
That way I would no longer use XA and would avoid the data inconsistency.
Thanks
Do not use a dblink; it is overkill, and the problem might not even be related to XA. The best solution is to access the tables of both schemas through a single datasource: either prefix the tables in your queries with the schema name, or create synonyms in one schema pointing to the tables in the other schema.
It is only a matter of database privileges; there is no need to deal with XA or dblinks.
One DB user needs to have grants to manipulate the tables in both schemas.
PS: you can use distributed transactions on connections pointing into the same database, if you insist on it. But in your case there is no need for that.
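A sketch of what this looks like from the application side, assuming hypothetical schemas LEGACY_A and LEGACY_B and a single non-XA datasource whose user has been granted access to both:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

// One ordinary local transaction covers both schemas; no XA involved
void updateBothSchemas(DataSource dataSource) throws SQLException {
    try (Connection con = dataSource.getConnection()) {
        con.setAutoCommit(false);
        try (Statement stmt = con.createStatement()) {
            stmt.executeUpdate("UPDATE LEGACY_A.ORDERS SET STATUS = 'PAID' WHERE ID = 1");
            stmt.executeUpdate("INSERT INTO LEGACY_B.AUDIT_LOG (MSG) VALUES ('order 1 paid')");
        }
        con.commit();
    }
}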
You can connect to one schema and create a DBLink for the other to give access to the second one. I think that transactions will work across both schemas.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts004.htm
I would like to use an in-memory DB with Hibernate, so my queries are super quick.
Moreover, I would like to periodically persist that in-memory state to a real MySQL DB.
Of course, the in-memory database should load its initial content on startup from that MySQL DB.
Are there any good frameworks/practices for that purpose? (I'm using Spring.) Any tutorials or pointers will help.
I'll be honest with you: most decent databases can be considered in-memory to an extent, given that they cache data and try to avoid hitting the hard disk as much as they can. In my experience, the best in-memory databases are either caches, or amalgamations of other data sources that are already persisted in some other form, and are then updated live for time-critical information or refreshed periodically for non-time-critical information.
Loading data into memory from a cold start is potentially a lengthy process, but subsequent queries are going to be super quick.
If you are trying to cache what's already persisted, you can look at memcached, but in essence in-memory databases always rely on a more persistent source, be it MySQL, SQL Server, Cassandra, MongoDB, you name it.
So it's a little unclear what you're trying to achieve. Suffice to say, it is possible to bring data in from persistent databases and keep a massive in-memory cache, but you need to design around how stale certain data can get and how often you need to hit the real source for up-to-the-second results.
Actually, the simplest approach would be to use some core Hibernate features: use the Hibernate Session itself and combine it with the second-level cache.
Declare the entities you want to cache as @Cacheable:
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NON_STRICT_READ_WRITE)
public class SomeReferenceData { ... }
Then implement the periodic flushing like this, supposing you are using JPA:
Open an EntityManager.
Load the entities you want to cache using that entity manager and no other.
Keep the entity manager open until the next periodic flush. Hibernate keeps track of which instances of SomeReferenceData were modified in memory via its dirty-checking mechanism, but no modification queries are issued.
Reads are served by the second-level cache instead of hitting the database.
When the moment comes to flush the session, just begin a transaction and commit immediately. Hibernate will update the modified entities in the database, update the second-level cache, and resume execution.
Eventually close the entity manager and replace it with a new one if you want to reload everything from the database; otherwise keep the same entity manager open.
Try this code to see the overall idea:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.Test;

public class PeriodicDBSynchronizeTest {

    @Test
    public void testSynch() {
        // create the entity manager, and keep it
        EntityManagerFactory factory = Persistence.createEntityManagerFactory("testModel");
        EntityManager em = factory.createEntityManager();

        // kept in memory due to @Cacheable
        SomeReferenceData ref1 = em.find(SomeReferenceData.class, 1L);
        SomeReferenceData ref2 = em.find(SomeReferenceData.class, 2L);
        SomeReferenceData ref3 = em.find(SomeReferenceData.class, 3L);
        // ... load further reference data as needed

        // modifications are tracked but not committed
        ref1.setCode("005");

        // these two lines flush the modifications into the database
        em.getTransaction().begin();
        em.getTransaction().commit();

        // continue using the ref data, tracking modifications until the next flush
        // ...
    }
}
I am using OpenJPA. If I want to do a mass delete/update using the executeUpdate() method, will the JPA cache be updated? Or will this bypass the JPA cache? When I say "cache", I am talking about both the L1 and L2 caches. Does the type of query matter (native vs. JPQL)? Thank you.
The documentation says:
The persistence context is not synchronized with the result of the bulk update or delete.

Caution should be used when executing bulk update or delete operations because they may result in inconsistencies between the database and the entities in the active persistence context. In general, bulk update and delete operations should only be performed within a transaction in a new persistence context or at the beginning of a transaction (before entities have been accessed whose state might be affected by such operations).
So, since OpenJPA doesn't synchronize the L1 cache, I don't see why it would (or how it could) synchronize the L2 cache. It could evict the affected entries, but I doubt it does. It's easy enough to test anyway.
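To be safe, you can evict stale state yourself after the bulk operation. A minimal sketch using only standard JPA APIs (the Account entity and the query are hypothetical):

import javax.persistence.EntityManager;

// Bulk operations bypass the persistence context, so clear stale state manually
void bulkDeleteInactive(EntityManager em) {
    em.getTransaction().begin();
    em.createQuery("DELETE FROM Account a WHERE a.active = false").executeUpdate();
    em.getTransaction().commit();

    em.clear(); // detaches everything, emptying the L1 cache
    em.getEntityManagerFactory().getCache().evict(Account.class); // invalidates L2 entries for Account
}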