Cache Isolation Level Warning on Parent Entity - caching

After adding a second persistence unit and changing my application's data sources to XADataSource (MySQL), I'm now getting a confusing warning in the glassfish log about isolation levels on my parent entity:
WARN o.e.p.s.f.j.ejb_or_metadata : Parent Entity BaseEntity has an isolation
level of: PROTECTED which is more protective then the subclass Contact with
isolation: null so the subclass has been set to the isolation level PROTECTED.
After some research, I think this isolation level warning is coming from EclipseLink's caching mechanism. But I am not specifying an isolation level anywhere in my app, so it appears that something in my configuration has caused the BaseEntity class to be given an isolation level of PROTECTED. The documentation (see the user guide) is silent on what might cause it to be automatically assigned that level.
Minor testing with a single user has shown that the application seems to work as expected, but this warning message doesn't make me feel comfortable rolling it out to the masses.
Can anyone shed some light into this message? Are my concerns valid?

The cache implementation here is just trying to sync the isolation level of the parent and child entities. But I think you should override the default protective isolation level, because Serializable is the most protective level and has poor performance. You can use Read Committed or Repeatable Read, depending on your requirements.

This is just a warning about cache isolation; it has nothing to do with database isolation, so you can just ignore it.
For more info on cache isolation see,
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Shared_and_Isolated
It is odd if you have not done any caching configuration, though. By default everything should be SHARED; to get something as PROTECTED you must have disabled caching for a related entity, such as by using @Cacheable(false)?
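For example (a hypothetical mapping; the entity and field names here are invented for illustration), a single non-cacheable related entity is enough to force PROTECTED on whatever references it:

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

// Hypothetical mapping: PhoneNumber is excluded from the shared cache, so a
// Contact that references it cannot be fully SHARED and gets downgraded to
// PROTECTED; the warning then syncs the parent/subclass isolation levels.
@Entity
@Cacheable(false)
class PhoneNumber {
    @Id Long id;
}

@Entity
class Contact extends BaseEntity {   // BaseEntity as in the warning above
    @ManyToOne
    PhoneNumber phone;               // non-shared reference forces PROTECTED
}
```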

After some research, I discovered that this warning had nothing to do with using the XADataSource. I had earlier begun some exploration into EclipseLink's Multitenancy, and it turned out that this was the culprit.
Referring to http://wiki.eclipse.org/EclipseLink/Examples/JPA/Multitenant#Persistence_Usage_for_Multiple_Tenants:
When using this architecture there is a shared cache available for regular entity types but the Multitenant types must be PROTECTED in the cache so the MULTITENANT_SHARED_EMF property must be set to true.
FYI -- In reviewing the code, there are 3 other cases in ClassDescriptor.initializeCaching() in which the cache isolation is downgraded to PROTECTED:
If the entity has a DatabaseMapping marking it as non-cacheable.
If the entity has a ForeignReferenceMapping that doesn't have an isolation level of shared.
If the entity has an AggregateObjectMapping that doesn't have an isolation level of shared.
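A minimal sketch of the setting described on the linked wiki page, assuming the property key shown there (the persistence-unit name is made up):

```xml
<persistence-unit name="multitenant-pu">
  <properties>
    <!-- Shared EntityManagerFactory across tenants: multitenant entity
         types then live in the PROTECTED (per-client) cache. -->
    <property name="eclipselink.multitenant.shared-emf" value="true"/>
  </properties>
</persistence-unit>
```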

Related

What happens when spring transaction isolation level conflicts with database transaction isolation level?

As far as I know the database transaction isolation level takes priority, or can Spring override it?
If the database level has priority, in what cases should Spring's isolation configuration be used?
There is no such separation as a "database transaction isolation level" and a "spring transaction isolation level".
A DB might implement the isolation levels defined by the SQL standard and a client that starts a transaction might request a specific level of isolation for it.
There are a couple of things to note that however do not present any contradiction:
A DB usually has a default isolation level that is used if a client does not explicitly request a specific level for a transaction. Say, in PostgreSQL the default one is Read Committed and in MySQL it's Repeatable Read.
A DB might not implement all of the isolation levels or have some specifics in their implementation. E.g. Oracle DB does not support the Read Uncommitted and Repeatable Read isolation levels and PostgreSQL's Read Uncommitted mode behaves like Read Committed.
With Spring, when you specify an isolation level, either via the @Transactional(isolation = ...) annotation or TransactionTemplate#setIsolationLevel(), it makes the JDBC driver issue an SQL command to set the desired level for the current session.
E.g. Oracle JDBC driver will do ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED for Read Committed.
If an unsupported level is specified it'll throw an exception.
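As a sketch of that Spring usage (the service and method names here are invented), requesting a specific level per transaction looks like:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service: Spring has the JDBC driver switch the current
// session/connection to READ COMMITTED before the method body runs,
// overriding the database's default level for this transaction only.
@Service
class TransferService {

    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void transfer(long fromAccount, long toAccount, long amount) {
        // ... repository calls run inside the READ COMMITTED transaction ...
    }
}
```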
Refs:
https://www.postgresql.org/docs/current/transaction-iso.html
https://docs.oracle.com/cd/E25054_01/server.1111/e25789/consist.htm#CNCPT1312

Spring Cache to Disable Cache by cacheName configuration

I am using Spring Boot, and it's very easy to integrate Spring Cache with other cache components.
To cache data we can use the @Cacheable annotation, but we still need to configure the cache name and add it to the cacheManager; without this step, we get an exception when accessing the method:
java.lang.IllegalArgumentException: Cannot find cache named 'xxxx' for Builder
My question is: is it possible to disable the cache, instead of raising the error, if we have not configured the cacheName? I ask because Spring Cache provides a configuration property, spring.cache.cacheNames, in CacheProperties.
Not sure if the condition attribute in @Cacheable works for this.
Any idea is appreciated! Thanks in advance!
It really depends on your "caching provider", and in particular on the implementation of the CacheManager interface. Since Spring's Cache Abstraction is just that, an "abstraction", it allows you to plug in different providers and backend data stores to support the caches required by your application (i.e. as determined by Spring's caching annotations, or alternatively, the JSR-107 JCache annotations; see here).
For instance, if you were to use the Spring Framework's ConcurrentMapCacheManager implementation (not recommended for production except for really simple use cases), then if you choose not to declare your caches at configuration/initialization time (using the default, no-arg constructor), the Caches are created lazily. However, if you do declare your Caches at configuration/initialization time (using the constructor that accepts cache name arguments), then if your application uses a cache (e.g. @Cacheable("NonExistingCache")) that was not explicitly declared, an Exception is thrown: the getCache(name:String):Cache method returns null, and the CacheInterceptor initialization logic throws an IllegalArgumentException because no Cache is available for the caching operation (follow from the CacheInterceptor down, here, here, here, here and then here).
There is currently no way to disable this initialization check (i.e. the thrown Exception) for non-existing caches. The best you can do is, like the ConcurrentMapCacheManager implementation, create Caches lazily. However, this heavily depends on your caching provider implementation. Some cache providers are more sophisticated than others, and creating a Cache on the fly (i.e. lazily) can be expensive, so it may not be supported, or not recommended, by the provider.
Still, you can work around this limitation by wrapping any CacheManager implementation of your choice: delegate to the underlying implementation for existing Caches, and safely handle non-existing Caches by treating access as a Cache miss, using simple wrapper implementations of the core Spring CacheManager and Cache interfaces.
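The wrap-and-delegate idea can be sketched without any Spring dependency. The interfaces below are minimal stand-ins for Spring's Cache and CacheManager (the names and shapes are simplified assumptions, not the real API; real code would implement org.springframework.cache.CacheManager and org.springframework.cache.Cache):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MissingCacheDemo {

    // Stand-in for Spring's Cache interface (simplified).
    interface Cache {
        String getName();
        Object get(Object key);
        void put(Object key, Object value);
    }

    // A real, map-backed cache for names that were configured up front.
    static class MapCache implements Cache {
        private final String name;
        private final Map<Object, Object> store = new ConcurrentHashMap<>();
        MapCache(String name) { this.name = name; }
        public String getName() { return name; }
        public Object get(Object key) { return store.get(key); }
        public void put(Object key, Object value) { store.put(key, value); }
    }

    // A no-op cache: every lookup is a miss and every put is discarded, so
    // an unconfigured cache name silently disables caching instead of failing.
    static class NoOpCache implements Cache {
        private final String name;
        NoOpCache(String name) { this.name = name; }
        public String getName() { return name; }
        public Object get(Object key) { return null; }
        public void put(Object key, Object value) { /* intentionally ignored */ }
    }

    // Delegating "manager": known names go to the real cache; unknown names
    // get a NoOpCache rather than null (a null return is what triggers the
    // IllegalArgumentException in Spring's cache interceptor).
    static class TolerantCacheManager {
        private final Map<String, Cache> caches = new ConcurrentHashMap<>();
        TolerantCacheManager(List<String> names) {
            names.forEach(n -> caches.put(n, new MapCache(n)));
        }
        Cache getCache(String name) {
            return caches.computeIfAbsent(name, NoOpCache::new);
        }
    }

    public static void main(String[] args) {
        TolerantCacheManager cm = new TolerantCacheManager(List.of("users"));
        cm.getCache("users").put("k", "v");
        System.out.println(cm.getCache("users").get("k"));   // prints v
        System.out.println(cm.getCache("missing").get("k")); // prints null, no exception
    }
}
```

Keeping the name on the NoOpCache (rather than a shared singleton) preserves visibility into which "named" caches were never configured.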
Here is an example integration test class that demonstrates your current problem. Note the test/assertions for non-existing Caches.
Then, here is an example integration test class that demonstrates how to effectively disable caching for non-existing Caches (not provided by the caching provider). Once again, note the test/assertions for safely accessing non-existing Caches.
This is made possible by the wrapper delegate for CacheManager (which wraps and delegates to an existing caching provider, which in this case is just the ConcurrentMapCacheManager again (see here), but would work for any caching provider supported by Spring Cache Abstraction) along with the NoOpNamedCache implementation of the Spring Cache interface. This no-op Cache instance could be a Singleton and reused for all non-existing Caches if you did not care about the name. But, it would give you a degree of visibility into which "named" Caches are not configured with an actual Cache since this most likely will have an impact on your services (i.e. service methods without caching enabled because the "named" cache does not exist).
Anyway, this may not be exactly what you want, and I would even caution you to take special care if you push this to production, since (I'd argue) it really ought to fail fast for missing Caches; but it does achieve what you want.
Clearly, it is configurable, and you could make it conditional based on cache name or other criteria. If you really don't care about, or don't want, caching on certain service methods in certain contexts, this approach is flexible and completely gives you that choice.
Hope this gives you some ideas.

How expensive are transactions in Grails?

I'm looking at performance issues with a Grails application, and the suggestion is to remove the transactions from the services.
Is there a way that I can measure the change in the service?
Is there a place that has data on how expensive transactions are? [Time and resource-wise]
If someone told you that removing transactions from your services was a good way to help performance, you should not listen to any future advice from that person. You should look at the time spent in transactions and determine what the real overhead is, and find methods and entire services that are run in transactions but don't need to be and fix those to be nontransactional. But removing all transactions would be irresponsible.
You would be intentionally adding sporadic errors in method return values and making your data inconsistent, and this will get worse when you have a lot of traffic. A somewhat faster but buggy app or web site is not going to be popular, and if this doesn't help performance (or not much) then you still have to do the real work of finding the bottlenecks, missing indexes, and other things that are genuinely causing problems.
I would remove all @Transactional annotations and database writes from all controllers though; not for performance reasons, but to keep the application tiers sensible and not polluted with unrelated code and logic.
If you find one or more service methods that don't require transactions, switch to annotating each transactional method as needed but omit the annotation at class scope so un-annotated methods inherit nothing and aren't transactional. You could also move those methods to non-transactional services.
Note that services are only non-transactional if there are no @Transactional annotations and there is a transactional property disabling the feature:
static transactional = false
If you don't have that property and have no annotations, it will look like it's ok, but transactional defaults to true if not specified.
There's also something else that can help a lot (and already does). The dataSource bean is actually a proxy of a proxy. One proxy returns the connection from the pool that's being used by an open Hibernate session or transaction, so you can see uncommitted data and run your queries and updates in the same connection. The other is more related to your question: org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy, which has been in Spring for years but has only been used in Grails since 2.3. It helps with methods that start or participate in a transaction but do no database work.
For the case of a single method call that unnecessarily starts and commits an 'empty' transaction, the overhead includes getting the pooled connection, calling setAutoCommit(false), setting the transaction isolation level, etc. Each of these costs is small, but they add up. The class works by giving you a proxied connection that caches these method calls, and only gets a real connection and invokes the cached methods on it when a query is actually run. If there are no queries and the only calls are those transaction-setup methods, there's basically no cost at all. You shouldn't rely on this, and should be intentional with your use of @Transactional annotations, but if you miss one, this pool proxy will help avoid unnecessary work.
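The mechanism can be illustrated with a toy dynamic proxy. This is a simplified assumption of what LazyConnectionDataSourceProxy does internally, not its actual code: transaction-setup calls are recorded instead of fetching a pooled connection, so an "empty" transaction never touches the pool.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

public class LazyConnectionDemo implements InvocationHandler {

    final List<String> deferred = new ArrayList<>();

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) {
        switch (m.getName()) {
            case "setAutoCommit":
            case "setTransactionIsolation":
            case "commit":
            case "rollback":
            case "close":
                deferred.add(m.getName()); // cache the call; no pool access yet
                return null;
            default:
                // Real work (e.g. prepareStatement) is what would finally
                // fetch a pooled connection in the real proxy.
                throw new UnsupportedOperationException(
                        "would fetch a pooled connection for: " + m.getName());
        }
    }

    // Simulates a transactional method that does no database work.
    public static List<String> emptyTransaction() {
        LazyConnectionDemo handler = new LazyConnectionDemo();
        Connection con = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                handler);
        try {
            con.setAutoCommit(false);
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            con.commit();
            con.close();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
        return handler.deferred;   // four calls recorded; pool never used
    }

    public static void main(String[] args) {
        System.out.println(emptyTransaction());
        // prints [setAutoCommit, setTransactionIsolation, commit, close]
    }
}
```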

Stateful session bean passivation with non-serializable EntityManager?

I have just read Why stateful and local anti-facades are KISS by Adam Bien where he suggests using a SFSB with an EntityManager to keep entities attached during the whole client interaction.
Doesn't this fail not only in a clustered environment, as mentioned in a comment, but also whenever the SFSB is passivated by the container?
If I'm right, what kind of solution would you suggest? I thought that, to minimize the number of layers in the application, it would be useful to bind the SFSBs to conversation scope and then reference them directly in my JSF views.
In general, a stateful architecture works against scalability.
I have been working with EJB 3 SLSBs for over 5 years now in multiple projects and never had a real issue with merging entities.
If you want to, you can decouple your client layer from your persistence layer by adding a layer of DTOs. This way you can design your entity model according to what's best for the business/persistence layer, and your DTOs according to the way your client wants to present the data.
If you want to use the same objects you can still do that, you should only pay attention to which objects are "in the session" and which are detached, and you won't have any merge issues.

ibatis / mybatis caching within a restful webservice

I am using MyBatis within a JAX-RS (Jersey) RESTful web app, so automatically I don't have session or state management.
Question is how can I use the caching features of mybatis ?
Caching in MyBatis is very straightforward. Per the documentation (p. 42 of the user manual, http://mybatis.googlecode.com/svn/trunk/doc/en/MyBatis-3-User-Guide.pdf):
By default, there is no caching enabled, except for local session caching, which improves performance and is required to resolve circular dependencies. To enable a second level of caching, you simply need to add one line to your SQL Mapping file:
<cache/>
Literally that’s it.
The common pitfalls I had while doing this:
On the mapper to which you add the cache element: if you have dependent entities, make sure to explicitly flush the cache when required. Flushing is already done for you on insert, update, and delete for the mappings where you set the cache element, but sometimes you have to flush a cache because of updates/deletes defined in different XML mappings.
Basically, when thinking about your caching, ask yourself: "When this entity is changed, do I want it to flush a cache for an entity in a different mapping?" If the answer is yes, use the cache-ref element instead of just cache.
Ex from page 45 of the doc:
<cache-ref namespace="com.someone.application.data.SomeMapper"/>
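A sketch of the two styles side by side (the namespaces and statements below are invented for illustration):

```xml
<!-- ContactMapper.xml: owns the cache -->
<mapper namespace="com.example.ContactMapper">
  <cache/>
  <select id="findById" resultType="Contact">
    SELECT * FROM contact WHERE id = #{id}
  </select>
</mapper>

<!-- AddressMapper.xml: shares ContactMapper's cache via cache-ref, so its
     updates flush that same cache instead of leaving stale Contact rows -->
<mapper namespace="com.example.AddressMapper">
  <cache-ref namespace="com.example.ContactMapper"/>
  <update id="updateCity">
    UPDATE address SET city = #{city} WHERE id = #{id}
  </update>
</mapper>
```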
