Ehcache caching data with clustered disk storage support (replication) - ehcache

I am using Ehcache in our application. It works on a single server, persisting data to disk, but as we move to production we have two different servers clustered in our application.
If the first request comes to server A, the data is cached on server A's OS disk and everything works fine. But if a later request goes to server B, the application cannot find the cached data, because the cached object sits on server A's disk. How do we replicate both disks in our ehcache-config.xml?

I recommend looking into clustering support with Terracotta.
See the documentation for this:
Ehcache 3.x line
Ehcache 2.x line
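For the 3.x line, the cluster tier is declared in the XML configuration. A minimal sketch, assuming a Terracotta server reachable at terracotta-host:9410; the cache manager path, cache alias, and resource sizes below are placeholders you would adapt:

```xml
<ehcache:config
    xmlns:ehcache="http://www.ehcache.org/v3"
    xmlns:tc="http://www.ehcache.org/v3/clustered">
  <ehcache:service>
    <tc:cluster>
      <!-- All app servers point at the same Terracotta stripe -->
      <tc:connection url="terracotta://terracotta-host:9410/my-cache-manager"/>
      <tc:server-side-config auto-create="true"/>
    </tc:cluster>
  </ehcache:service>
  <ehcache:cache alias="shared-cache">
    <ehcache:resources>
      <!-- Data lives on the Terracotta server, visible to both app servers -->
      <tc:clustered-dedicated unit="MB">64</tc:clustered-dedicated>
    </ehcache:resources>
  </ehcache:cache>
</ehcache:config>
```

With a setup along these lines, both server A and server B read and write the same clustered tier instead of their local disks.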

Related

How to sync different servers to GORM layer with cache enabled in Grails

I have three instances of a Grails app running in three different Tomcat servers, sharing the same DB (Oracle). I've enabled some of my domain classes with cache: true.
How can I sync the cache on all of my servers when updating the DB via one server?
You would need a shared distributed cache. Ehcache would be one option (https://grails-plugins.github.io/grails-cache-ehcache/).
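Since the Grails plugin sits on the Ehcache 2.x line, one way to keep the three Tomcat instances in sync is Ehcache's built-in RMI replication. A sketch of the relevant ehcache.xml pieces, assuming multicast discovery works on your network; the cache name and multicast address/port are placeholders:

```xml
<ehcache>
  <!-- Each Tomcat instance discovers its peers via multicast -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                  multicastGroupPort=4446, timeToLive=1"/>
  <cacheManagerPeerListenerFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>

  <!-- One entry per replicated domain-class cache -->
  <cache name="org.example.Book"
         maxElementsInMemory="10000"
         eternal="false"
         timeToLiveSeconds="300">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
  </cache>
</ehcache>
```

Updates made through one server are then propagated to the peer caches on the other two, so a stale second-level cache entry doesn't survive a write elsewhere.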

Which distributed database should I choose for a medium-data project

We currently have a Java project with a PostgreSQL database, on Spring Boot 2 with Spring Data JPA (Hibernate).
Requirements for the new architecture:
On N computers we have workplaces. Each workplace uses the same program with a different configuration (a configured client for the distributed database).
The number of computers is not big - about 10-20 PCs. The database must be scalable (a lot of data can be stored on disk, ~1-2 TB).
Every day up to 1 million rows can be inserted into the database from one workplace.
Each workplace works with the distributed database - that is, each node must be able to read and write data modified by the others, and make decisions at runtime based on data modified by another workplace (transactionally).
The datastore (on-disk database archive) must be able to be archived and copied as a backup snapshot.
The project must be portable to the new architecture with Spring Data JPA 2 and database backups with Liquibase, and it must work on Windows/Linux.
A quick overview shows me that the most popular distributed FREE databases right now are:
1) Redis
2) Apache Ignite
3) Hazelcast
I need help in understanding how to architect the described system.
First of all, I tried Redis and Ignite. Redis starts easily - but it works like a simple IMDG (in-memory data grid), and I need to store all the data in a persistent database (on disk, like Ignite's native persistence). Is there a way to use Redis with the existing PostgreSQL database? Postgres stays synchronized across all nodes, Redis serves as an in-memory cache holding the fresh data produced by each workplace, and every 10 minutes the data is flushed to disk.
1) Is this possible? How?
I also tried Ignite - but my project runs on Spring Boot 2 / Spring Data 2, and the latest released Ignite version is 2.6; Spring Data 2 support will only appear in Apache Ignite 2.7!
2) I could download a 2.7 nightly build, but how can I use it in my project? (Do I need to install it into my local Maven repository?)
3) And after all, what would be the best architecture in this case? A datastore provider that stores persistent data on disk, synchronized with each workplace's in-memory cache, and persists the in-memory data to disk on a timeout?
What would be the best solution, and which database should I choose?
(Maybe something that works with the existing PostgreSQL?)
Thx)
Your use case sounds like a common one for Hazelcast. You can store your data in memory (i.e. in a Hazelcast IMap) and use a MapStore/MapLoader to persist changes to your database or to read from it. Persisting changes can be done in a write-through or write-behind manner depending on your configuration. There is also Spring Boot and Spring Data JPA integration available.
Also, the amount of data you want to store is pretty big for 10-20 machines, so you might want to look into Hazelcast's High-Density Memory Store option, which lets you store large amounts of data on commodity hardware without GC problems.
The following links should give you further ideas:
https://opencredo.com/spring-booting-hazelcast/
https://docs.hazelcast.org//docs/3.11/manual/html-single/index.html#loading-and-storing-persistent-data
https://hazelcast.com/products/high-density-memory-store/
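As a rough illustration of the write-behind setup described above, the map-store section of a hazelcast.xml might look like the sketch below. The map name and the MapStore class (com.example.PostgresMapStore, which you would implement yourself against your PostgreSQL schema) are assumptions, not part of Hazelcast:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <map name="workplace-data">
    <map-store enabled="true">
      <!-- Your MapStore implementation, backed by PostgreSQL via JDBC -->
      <class-name>com.example.PostgresMapStore</class-name>
      <!-- 0 = write-through (synchronous); > 0 = write-behind,
           batching changes and flushing them after this many seconds -->
      <write-delay-seconds>10</write-delay-seconds>
      <write-batch-size>1000</write-batch-size>
    </map-store>
  </map>
</hazelcast>
```

This mirrors the flow you described: each workplace reads and writes the in-memory map, and Hazelcast flushes the accumulated changes to the relational database on the configured delay.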
Ignite is not suitable for this, because it supports only JPA 1.
Redis doesn't support SQL queries.
Our choice is plain PostgreSQL with master-slave replication. Maybe CockroachDB would also apply.
Thx for help))

Couchbase: How to make cache persistence an option?

We have a memcached cluster running in production as a cache layer on top of MySQL. Now we are considering replacing memcached with Couchbase, to avoid the cold-cache issue (in case of a crash) and to get the nice feature of a managed cache cluster.
At the same time, we want to minimize the changes needed to migrate to Couchbase. One approach is to keep the libmemcached API and set up a proxy to direct all requests to Couchbase. This way nothing changes in the application code. If I understand correctly, Couchbase then behaves as a managed memcached cluster, and we don't take advantage of persistent cache items. We can't do something like flagging a certain cached item as persistent:
# connect to Couchbase as if it were memcached
$ telnet localhost 11211
set foo 0 0 9    # how can we make this item persistent in Couchbase?
foo value
I assume this is because all items are stored in a memcached bucket. So the question becomes:
Can we control which items are stored in a Couchbase bucket versus a memcached bucket? To do so, do we have to change the libmemcached API and all the application code related to it?
Thanks!
I think you should look into running Moxi, which is a memcached proxy for Couchbase. You can configure Moxi with the destination Couchbase bucket.
A Couchbase cluster automatically spins up a cluster-aware Moxi gateway, which you can point your web/application servers to. This is what Couchbase calls "server-side moxi".
Alternatively, you can install Moxi on each of your web/app servers, so they simply connect to localhost:11211. Moxi then handles the persistent connections to the Couchbase cluster. This is what Couchbase calls "client-side moxi".
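For the client-side option, a Moxi instance is typically pointed at the cluster's streaming bucket URL. This is only a sketch - the host, bucket name, and port are placeholders, and you should check the flags against the documentation for your Moxi version:

```shell
# Run moxi locally, proxying memcached clients on port 11211
# to the "default" bucket of the Couchbase cluster.
moxi -z http://couchbase-host:8091/pools/default/bucketsStreaming/default \
     -p 11211
```

Your application keeps speaking the memcached protocol to localhost:11211, while items land in a Couchbase bucket and therefore survive a restart.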

Redis replication for cached session in Django

I am developing the Django backend of an iOS app. I will use cached sessions with Redis: once a user logs in, I will save his session in the Redis cache (backed by MySQL). I just want to know (for the long run) whether I can use Redis replication to keep a copy of the cached session, in case I scale the Redis server into a master-slave setup in the future, or whether I should always access the cached value from one particular Redis server.
It makes sense to keep a copy via Redis replication in a master/slave format, since there is no built-in sharding for Redis yet like there is in MongoDB (AFAIK). So you have to get your session from one particular Redis server, unless you want to manage several Redis servers manually.
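Setting up that master/slave copy is a one-line change on the replica side. A sketch, assuming a reachable master host (note that newer Redis versions spell the directive replicaof instead):

```
# redis.conf on the replica
slaveof redis-master 6379
```

Writes (session saves) still go to the master; the replica keeps an up-to-date copy you can fail over to or serve reads from later.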

How to get all the cache names in an Infinispan cache cluster

I am using Infinispan with JGroups in Java.
I want to get all the cache names in an Infinispan cache cluster.
I have tried using
DefaultCacheManager.getCacheNames();
but it gives only the caches that have been accessed on the JVM it is called from, not all the caches in the cluster.
Once I access a cache on that JVM, it becomes available and starts appearing in the cache list I get from
DefaultCacheManager.getCacheNames();
I am using the same config file for Infinispan and JGroups (using TCP).
Please suggest a way to get all the cache names in a cluster.
Thanks,
Ankur
Hmmm, normally you'll have all caches defined cluster-wide, so getting the cache names on one node is enough to know which caches are available across the cluster.
That doesn't seem to be your case though, so the easiest thing I can think of is to use the Map/Reduce (distributed execution) functionality in Infinispan to retrieve the cache names from the individual nodes in the cluster and then collate them.
For more info, see https://docs.jboss.org/author/display/ISPN/Infinispan+Distributed+Execution+Framework and https://www.jboss.org/dms/judcon/presentations/Boston2011/JUDConBoston2011_day2track2session2.pdf
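The collation step itself is plain Java once each node has reported its local result. A minimal sketch, assuming the per-node sets below stand in for what a distributed task would return from each node's cacheManager.getCacheNames() call (the node and cache names are hypothetical):

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

public class CollateCacheNames {
    // Union the per-node name sets into one sorted, de-duplicated view.
    static Set<String> collate(Collection<Set<String>> perNodeNames) {
        return perNodeNames.stream()
                .flatMap(Set::stream)
                .collect(Collectors.toCollection(TreeSet::new));
    }

    public static void main(String[] args) {
        // Each set simulates one node's DefaultCacheManager.getCacheNames()
        Set<String> nodeA = Set.of("users", "orders");
        Set<String> nodeB = Set.of("orders", "sessions");
        System.out.println(collate(List.of(nodeA, nodeB)));
        // [orders, sessions, users]
    }
}
```

The distributed-execution framework linked above would run the "get local names" part on every node; the merge shown here then gives you the cluster-wide list.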
