What's the difference between an Infinispan local cache and a distributed cache? I know that with a local cache no cluster can be formed, whereas with a distributed cache we can form a cluster. My question is whether the distributed cache data is stored in our running application's heap or external to the application. Also, is it possible for an application running in one JVM to obtain data stored in Infinispan running in another JVM?
Infinispan stores the data in the application heap. It can also persist the data if you configure a CacheStore (for example, a database). You have the details here: Infinispan Persistence
About the second question: yes, Infinispan knows where the data is stored and fetches it from other nodes (or JVMs) if no copy is available locally. If you need more details, you can check the documentation: Infinispan Clustering
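As a rough illustration of both points, here is a minimal embedded-mode sketch of a distributed cache. The cache name and the numOwners setting are just examples, and the builder API may vary slightly between Infinispan versions:

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class DistributedCacheSketch {
    public static void main(String[] args) {
        // Clustered transport (JGroups under the hood) so this node can join other JVMs.
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        DefaultCacheManager manager = new DefaultCacheManager(global.build());

        // DIST_SYNC keeps a configurable number of copies (owners) spread across the cluster.
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(2);
        manager.defineConfiguration("my-dist-cache", cfg.build());

        Cache<String, String> cache = manager.getCache("my-dist-cache");
        cache.put("key", "value"); // stored in the heap of the owner nodes

        manager.stop();
    }
}
```

Each owner of a key keeps that entry in its own heap; a get() on a non-owner node transparently fetches the value over the network from an owner.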
We plan to use Ehcache in our application, which resides in a cluster. However, we do not want to use Terracotta SA. What are the other options for cache replication across all the servers in the cluster?
The quick answer is: No.
With Ehcache 2 you could, but it wasn't really reliable. With Ehcache 3, clustering is done using a Terracotta server. This has always been the most reliable way.
Note that the Terracotta server is open source.
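For reference, here is a rough sketch of what a clustered cache manager looks like with Ehcache 3 and the ehcache-clustered module. The terracotta:// URI, port and "my-application" entity name are placeholders, and the exact builder API (autoCreate in particular) differs between 3.x versions:

```java
import java.net.URI;
import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;

public class ClusteredEhcacheSketch {
    public static void main(String[] args) {
        // Connect the cache manager to a running Terracotta server.
        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .with(ClusteringServiceConfigurationBuilder
                .cluster(URI.create("terracotta://localhost:9410/my-application"))
                .autoCreate())
            .build(true);

        // ... define clustered caches via CacheConfigurationBuilder here ...

        cacheManager.close();
    }
}
```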
I am trying to deploy a spring-security server, with Redis as token store.
In order to have some redundancy in Redis, we want to deploy it as a cluster.
The problem is that Jedis, which Spring Security uses as its underlying library, doesn't support pipelining in cluster mode, but Spring Security uses pipelining.
My question is how I can solve this situation. More precisely:
1- Should I use another mode of deployment for Redis? What actually works?
2- Can I somehow force Spring Security to use Redisson for connecting to Redis?
Please advise.
If you want redundancy, use replication (master/slave) not cluster.
If you have more data than RAM on a machine, use cluster.
If you have more data than RAM on a machine and want redundancy, use cluster with replication.
Jedis supports replication with Sentinel, so give that a go unless you have a lot of data. Some more info on usage here: https://github.com/xetorthio/jedis/issues/725
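A minimal sketch of Jedis with Sentinel, assuming a Sentinel-managed master named "mymaster" and the sentinel host/port values shown (all placeholders); this is plain Jedis usage, not the Spring Security wiring itself:

```java
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelSketch {
    public static void main(String[] args) {
        Set<String> sentinels = new HashSet<>();
        sentinels.add("sentinel-1:26379");
        sentinels.add("sentinel-2:26379");
        sentinels.add("sentinel-3:26379");

        // The pool tracks the current master via Sentinel, so failover is transparent
        // to the application; pipelining keeps working because this is not cluster mode.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            jedis.set("token:abc", "some-session-data");
            System.out.println(jedis.get("token:abc"));
        }
    }
}
```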
We have a WebLogic cluster with a set of managed servers where we have already deployed our enterprise application, which is heavily involved in database transactions. We recently migrated to WebLogic 12c, and we have a proposal in the pipeline to use Oracle Coherence to add a caching layer so that we can improve application performance.
After doing some research we found that the Managed Coherence Servers (MCS) feature in WLS 12c is supported by creating an additional storage-enabled cluster which can be used to deploy the GAR.
However, adding memory to the production setup to create a new cluster is costly, so before proposing it I want to know whether Oracle Coherence can be used without creating a new cluster, yet without compromising MCS features?
You can create Oracle Coherence caches in the same cluster where the application EAR is deployed. Follow this link on how to create the caches: Pack GAR with Application EAR
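For illustration, a minimal sketch of how application code running on the same managed servers would touch a Coherence cache. The cache name is a placeholder and must match the cache configuration packaged in the GAR; whether a given JVM actually stores cache data (and therefore spends heap on it) is normally controlled by the storage-enabled setting of that member:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceCacheAccess {
    public static void main(String[] args) {
        // "app-cache" is a placeholder; it has to be defined in the GAR's cache config.
        NamedCache cache = CacheFactory.getCache("app-cache");
        cache.put("order:1001", "cached-order-state");
        System.out.println(cache.get("order:1001"));

        CacheFactory.shutdown();
    }
}
```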
But it always makes sense to have your caching layer separate from your application layer, as caches also use heap memory to store/cache objects. When your application and the cache are deployed on the same server (same JVM), they have to share the heap space between them. And as far as I know, there is no clear way to define separate zones in heap memory for different applications running on the same JVM.
Hence there is a high probability that one of them will consume more heap and starve the other, ultimately causing OutOfMemory errors. Plus, capacity planning for the Coherence caches will also be a nightmare, as you have to consider the memory consumed by the application too.
I recently took over an old web application. The app saves a lot of objects into Ehcache at init time and then obtains the objects from the cache. By now the number of objects has grown many times over, and Ehcache in the app can no longer meet the requirement. So we are considering a distributed cache: we will set some strategies and let the objects be saved on different cache servers.
Redis and Hazelcast both look good. The question is that, compared to the previous in-process Ehcache, Redis and Hazelcast must serialize objects, which may consume more time.
So which is better?
Or are there other, better alternatives?
Thanks in advance.
What are your requirements? Hazelcast has pluggable serialization and there are serialization libraries like FlatBuffers that are exceedingly fast if needed.
Hazelcast is ideal if you are Java-centric; of course, you can also access Hazelcast from .NET CLR languages if you are using Hazelcast Enterprise.
Hazelcast is more than just a simple distributed cache: you can also leverage the distributed CPUs in your cluster and execute processing directly in the grid, which can result in very high performance.
If you are multi-language and don't need to execute code on top of distributed data (in-memory executor service, MapReduce or entry processors), Redis would make sense.
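To give a feel for the programming model, a minimal sketch with two embedded members sharing an IMap. It assumes Hazelcast 3.x imports (the IMap package moved to com.hazelcast.map in 4.x), and the map name is just an example:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class HazelcastCacheSketch {
    public static void main(String[] args) {
        // Two instances in one JVM, just to show that entries put on one
        // member are visible from the other.
        HazelcastInstance member1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance member2 = Hazelcast.newHazelcastInstance();

        IMap<String, String> cache = member1.getMap("app-cache");
        cache.put("user:42", "cached-profile-json");

        // Reads go to whichever member owns the key's partition.
        System.out.println(member2.<String, String>getMap("app-cache").get("user:42"));

        Hazelcast.shutdownAll();
    }
}
```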
Have a look at Redisson - a Redis-based In-Memory Data Grid and a good alternative to Ehcache. It provides a lot of data serialization codecs, like:
Jackson JSON, Avro, Smile, CBOR, MsgPack, Amazon Ion, Kryo, JDK Serialization, LZ4 and Snappy.
It also allows you to store Map data in external storage alongside the Redis store using read-through, write-through and write-behind strategies.
Offers distributed MapReduce and Executor/Scheduler services.
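A minimal Redisson sketch, assuming a single Redis node at 127.0.0.1:6379 purely for illustration; cluster, sentinel and replicated modes are configured through the same Config object:

```java
import org.redisson.Redisson;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonCacheSketch {
    public static void main(String[] args) {
        // Single-node address for the example only.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");

        RedissonClient redisson = Redisson.create(config);

        // RMap keeps the data in Redis; serialization is handled by the configured codec.
        RMap<String, String> cache = redisson.getMap("app-cache");
        cache.put("user:42", "cached-profile-json");
        System.out.println(cache.get("user:42"));

        redisson.shutdown();
    }
}
```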
I am using the Ehcache framework to cache application data and am thinking of using JGroups cache replication to replicate the cache in a clustered environment.
Is it really an industry standard for cache replication in a clustered environment, or are there better options that I should think about? Please note that I am not using any centralized cache server at this point in time. I have already done a POC on JGroups cache replication. Could you please share your experience in terms of its robustness and major concerns? What are the pros and cons of using JGroups for cache replication?
I am using JGroups for clustering various application nodes. We have our own cache implementation which uses the underlying JGroups channel for data replication/distribution. So far it is working fine, without issues.
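For anyone curious what the JGroups side of such a home-grown replication layer roughly looks like, here is a minimal sketch assuming JGroups 3.x/4.x (the Message/Receiver API changed in 5.x); the cluster name and payload are placeholders:

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class ReplicationChannelSketch {
    public static void main(String[] args) throws Exception {
        // Default protocol stack; a production setup would usually supply its own
        // XML stack tuned for the network environment.
        JChannel channel = new JChannel();
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                // Apply the replicated update to the local cache here.
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("cache-replication-cluster");

        // A null destination broadcasts to every member of the cluster.
        channel.send(new Message(null, "put key=42 value=cached-entry"));

        channel.close();
    }
}
```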
You could also look at Infinispan. It's a distributed cache which uses JGroups under the hood to handle the cluster.