IgnoreSizeOf annotation when using ehcache with terracotta

I am using Ehcache with Terracotta in my application. My response time increased 700-fold when using Ehcache with Terracotta. I think Terracotta is spending the time measuring the size of objects, as it gives me this warning:
net.sf.ehcache.pool.sizeof.ObjectGraphWalker checkMaxDepth
WARNING:
The configured limit of 1,000 object references was reached while
attempting to calculate the size of the object graph. Severe
performance degradation could occur if the sizing operation continues.
This can be avoided by setting the CacheManager or Cache
elements maxDepthExceededBehavior to "abort" or adding stop points
with @IgnoreSizeOf annotations. If performance degradation is NOT an
issue at the configured limit, raise the limit value using the
CacheManager or Cache elements maxDepth attribute. For
more information, see the Ehcache configuration documentation.
When I used the @IgnoreSizeOf annotation on my class, the response time dropped significantly. My question is: does using the @IgnoreSizeOf annotation have any disadvantages? What is it used for, and how does it reduce the response time of my application? Please help.
Thanks in advance.

This annotation isn't related to Terracotta clustering. I guess you posted that other question on this subject.
The @IgnoreSizeOf annotation makes the sizeOfEngine, which measures the memory footprint of entries in your caches, ignore instances of annotated classes (or entire packages), or subgraphs (annotated fields) of your cached entries.
So if an object graph that you cache has a "shared" subgraph, you'd annotate the field where that subgraph starts. If you ignore everything, then nothing is sized and the maxBytesLocalHeap setting has no meaning (you'll eventually suffer an OutOfMemoryError).
You need to understand the object graphs you're caching in order to use the annotation properly. See http://ehcache.org/documentation/configuration/cache-size#built-in-sizing-computation-and-enforcement for more details.
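For illustration, a minimal sketch of a field-level stop point, assuming Ehcache 2.x (CachedOrder and ProductCatalog are hypothetical names):

    import net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf;

    class ProductCatalog { /* hypothetical large, shared object graph */ }

    public class CachedOrder {
        private long id;
        private String customerName;

        // Stop point: the SizeOf engine will not walk into this subgraph,
        // so the shared catalog is not re-measured for every cached entry.
        @IgnoreSizeOf
        private ProductCatalog catalog;
    }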
Now, as to the performance issue you're seeing: you might want to test with and without the maxBytesLocalHeap setting, and with and without clustering, to try to pinpoint your problem. But I suspect you might be caching more than you expect, resulting in a big memory footprint, as well as in overhead clustering the data...
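If you go down that route, here is a hedged sketch of the two sizing variants using Ehcache 2.x programmatic configuration (cache names are made up): byte-based sizing walks the object graph on every put, count-based sizing does not.

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.config.CacheConfiguration;
    import net.sf.ehcache.config.MemoryUnit;

    public class SizingToggle {
        public static void main(String[] args) {
            // Variant A: byte-based sizing - every put() walks the object
            // graph to measure it (this is where the sizeOf cost shows up).
            CacheConfiguration sized = new CacheConfiguration().name("sized")
                    .maxBytesLocalHeap(64, MemoryUnit.MEGABYTES);

            // Variant B: count-based sizing - no graph walking at all.
            CacheConfiguration counted = new CacheConfiguration().name("counted")
                    .maxEntriesLocalHeap(10_000);

            CacheManager manager = CacheManager.getInstance();
            manager.addCache(new Cache(sized));
            manager.addCache(new Cache(counted));
            manager.shutdown();
        }
    }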

Related

Which caching mechanism to use in my Spring application in the scenarios below

We are using a Spring Boot application with a MariaDB database. We get data from different services and store it in our database, and while calling another service we need to fetch data from the DB (based on a mapping) and call that service.
To avoid the database hit, we want to cache all the mapping data and use it to retrieve data and call the service API.
So our ask is: add data to the cache when it is created in the database (this could add up to millions of records), and remove it from the cache when the status of one of the column values is "xyz" (for example), or based on an eviction policy.
Should we use an in-memory cache such as Hazelcast/Ehcache, or Redis/Couchbase?
Please suggest.
Thanks
I mostly agree with Rick in terms of "don't build it until you need it"; however, it is important these days to think early about where this caching layer would fit later and how to integrate it (for example, using interfaces). Adding it to an unprepared system is always possible, but much more expensive (in terms of hours) and complicated.
OK, on to the actual question. Disclaimer: I'm a Hazelcast employee.
In general, Hazelcast, Ehcache, Redis, and the others are all good candidates for caching. The first question you want to ask yourself, though, is: "Can I hold all necessary records in the memory of a single machine?" Especially with Ehcache you get replication (all machines hold all the information), which means every single node needs to keep the records in memory. Depending on the size you want to cache, that may not be optimal. In this case Hazelcast might be the better option, as we partition data across a cluster and optimize access down to a single network hop, with minimal overhead beyond network latency.
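To make the partitioning point concrete, here is a minimal sketch, assuming Hazelcast 3.x on the classpath (the map name and types are invented): each member owns a slice of the entries, and a get() is at most one network hop to the owning member.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class MappingCache {
        public static void main(String[] args) {
            // Starts (or joins) a cluster member; entries of the map are
            // partitioned across all members instead of replicated to all.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<Long, String> mappings = hz.getMap("mappings");
            mappings.put(42L, "service-A");
            System.out.println(mappings.get(42L));
            hz.shutdown();
        }
    }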
The second question would be around serialization. Do you want to store information in a highly optimized serialized form (which needs code to transform it into something human-readable), or do you want to store it as JSON?
The third question is about the number of clients and threads that will access the data store. Obviously, a local cache like Ehcache is always the fastest option, at the tradeoff of lots and lots of memory. Apart from that, the most important factor is the threading model the in-memory store uses: either it is multithreaded and scales nicely, or it is a single-threaded design that becomes a bottleneck once you exhaust that thread. The latter can be worked around with more processes, but that is just a workaround for utilizing today's systems to the fullest.
In more general terms, each of the systems you mentioned would do the job. The best tool, however, should be selected by a POC/prototype against your real-world use case. The important bit is "real world": a single thread behaves amazingly under low pressure (obviously way faster), but when exhausted it becomes a major bottleneck (again, obviously delaying responses).
I hope this helps a bit, since, at least to me, any answer like "yes, we are the best option" would be an immediate no-go for the person who said it.
Build InnoDB with the memcached Plugin
https://dev.mysql.com/doc/refman/5.7/en/innodb-memcached.html

How does EhCache3 handle eviction when cache is full?

Eviction policies seem to have been removed in EhCache3. There is an EvictionAdvisor interface that can be implemented, but what would be the default behaviour?
I am using EhCache3 with SpringBoot.
The exact eviction algorithm depends on the tier configuration. As such, Ehcache does not document it explicitly, so that it can be tweaked in the future.
Eviction advisors are an opt-in way of saying that some elements should really remain in the cache over others. It effectively means they are not considered by the eviction algorithm unless no other eviction candidates can be found.
Note that this is an advanced feature that can have quite a severe performance effect if you end up advising against evicting a large portion of the cache.
And as said in another answer, there is no default advice: by default, all entries are treated equally and are subject to the internal eviction algorithm.
The default is NO_ADVICE.
From the javadoc:
Returns an {@link EvictionAdvisor} where no mappings are advised against eviction.
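To make that concrete, here is a hedged sketch of opting in, assuming Ehcache 3.x (the cache alias and the "pinned:" value convention are invented): advised entries are skipped by eviction unless no other candidates exist.

    import org.ehcache.Cache;
    import org.ehcache.CacheManager;
    import org.ehcache.config.EvictionAdvisor;
    import org.ehcache.config.builders.CacheConfigurationBuilder;
    import org.ehcache.config.builders.CacheManagerBuilder;
    import org.ehcache.config.builders.ResourcePoolsBuilder;

    public class EvictionAdvisorExample {
        public static void main(String[] args) {
            // Advise against evicting "pinned" entries; they are only
            // evicted when no other candidate can be found.
            EvictionAdvisor<Long, String> advisor =
                    (key, value) -> value.startsWith("pinned:");

            CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                    .withCache("example",
                            CacheConfigurationBuilder
                                    .newCacheConfigurationBuilder(Long.class, String.class,
                                            ResourcePoolsBuilder.heap(100))
                                    .withEvictionAdvisor(advisor))
                    .build(true);

            Cache<Long, String> cache =
                    cacheManager.getCache("example", Long.class, String.class);
            cache.put(1L, "pinned:config");
            cacheManager.close();
        }
    }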

Caching in Laravel 5.2

I want to use Redis as a cache in my project. As we know, Redis stores data in memory, and there are obviously limitations on that. How long will the data persist in memory? Do I need to implement some algorithm for that (least recently used, for example)?
There is no need to implement such algorithms explicitly. Redis comes with built-in eviction policies; you can configure one of them. http://redis.io/topics/lru-cache
Redis supports expiring keys after a certain time. If you need the cache only for 4 hours, for example, you can set an expiry accordingly. http://redis.io/commands/expire
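For example, here is a small sketch with the Jedis client (the key and value are invented): SET stores the entry, EXPIRE lets Redis drop it after 4 hours.

    import redis.clients.jedis.Jedis;

    public class CacheExpiry {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Cache the value, then let Redis delete it after 4 hours.
                jedis.set("cache:mapping:42", "service-A");
                jedis.expire("cache:mapping:42", 4 * 60 * 60);
            }
        }
    }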
Redis compresses data structures that stay within configured size ranges. You can design all your hashes and sorted sets in such a way that they hold a lot of data in less memory. http://redis.io/topics/memory-optimization
Go through all these docs, you will get a better idea on implementing. Hope this helps.

Fine-tuning and monitoring a Spring cache backed by a ConcurrentMapCache

I have set up a Spring cache manager backed by a ConcurrentMapCache for my application.
I am seeking ways to monitor the cache, and especially to make sure the cached data fits in memory. I considered using jvisualvm for that purpose, but there might be other ways... If so, what are they?
So my question is basically twofold:
What is the best way to monitor a cache backed by a ConcurrentMapCache?
What are the general guidelines for setting the time to live and cache size values of a cache?
It looks like you are searching for cache-related features that can't and won't be available with a "simple" map implementation provided by the JVM.
There are many cache providers out there that provide what you want, that is: monitoring, limiting the size of the cache, and providing a TTL contract for the cache elements. I would encourage you to look around and switch your CacheManager implementation, which will have zero impact on your code since you're using the abstraction.
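As one hedged example, switching the same abstraction to a Caffeine-backed manager (assuming spring-context-support and Caffeine on the classpath; the cache name is invented) gives you a size limit, a TTL, and hit/miss statistics without touching the calling code:

    import java.util.concurrent.TimeUnit;

    import com.github.benmanes.caffeine.cache.Caffeine;
    import org.springframework.cache.CacheManager;
    import org.springframework.cache.caffeine.CaffeineCacheManager;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class CacheConfig {
        // Same Spring Cache abstraction as ConcurrentMapCache, but with
        // bounds, expiry, and monitoring hooks.
        @Bean
        public CacheManager cacheManager() {
            CaffeineCacheManager manager = new CaffeineCacheManager("myCache");
            manager.setCaffeine(Caffeine.newBuilder()
                    .maximumSize(10_000)                    // cap entries kept on heap
                    .expireAfterWrite(10, TimeUnit.MINUTES) // time-to-live per entry
                    .recordStats());                        // expose hit/miss counters
            return manager;
        }
    }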

How to deal with Java EE concurrency

Please let me know the best practices for providing application concurrency in a software project. I would like to use Hibernate for ORM, Spring to manage transactions, and MySQL as the database.
My concurrency requirement is to let as many users as possible connect to the database, perform CRUD operations, and use the services, but I do not want stale data.
How do I handle data concurrency issues in the DB?
How do I handle application concurrency? If two threads access my object simultaneously, will it corrupt the state of my object?
What are the best practices?
Do you recommend defining isolation levels in Spring methods, e.g.
@Transactional(isolation=Isolation.READ_COMMITTED)? How do I make that decision?
I have come up with the items below; I would like your feedback and how to address them:
a. How to handle data concurrency issues in the DB?
Use a version tag and timestamp.
b. How to handle application concurrency?
Use optimistic locking. Avoid synchronization by creating objects per request (prototype scope).
c. What are the best practices?
Cache objects whenever possible.
By using transactions, and @Version fields in your entities if needed.
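For illustration, a minimal optimistic-locking entity (the Account class is invented):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Account {
        @Id
        private Long id;

        private long balance;

        // Hibernate increments this on every update; a concurrent stale
        // update then fails with an OptimisticLockException instead of
        // silently overwriting newer data.
        @Version
        private int version;
    }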
Spring beans are singletons by default, and are thus shared by threads. But they're usually stateless, and thus inherently thread-safe. Hibernate entities shouldn't be shared between threads, and aren't unless you explicitly do it yourself: each transaction, running in its own thread, will have its own Hibernate session, loading its own entity instances.
This one is much too broad to be answered.
The default isolation level of your database is usually what you want. It is READ_COMMITTED in most databases I know about.
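As a sketch (the service and method names are invented), explicitly setting the isolation level on a Spring method looks like this:

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class AccountService {
        // READ_COMMITTED matches the default of most databases; only
        // override it when you have a proven consistency requirement.
        @Transactional(isolation = Isolation.READ_COMMITTED)
        public void transfer(long fromId, long toId, long amount) {
            // load both accounts, adjust balances, save - all in one tx
        }
    }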
Regarding your point c (cache objects whenever possible), I would say that this is exactly what you shouldn't do. Caching makes your app stateful, difficult to cluster, and much more complex, and you'll have to deal with the staleness of the cache. Don't cache anything until:
you have a performance problem
you can't tweak your algorithms and queries to solve the performance problem
you have proven that caching will solve the performance problem
you have proven that caching won't cause problems worse than the performance problem
Databases are fast, and already cache data in memory for you.
