How to get all the cache names in an Infinispan cache cluster - caching

I am using Infinispan with JGroups in Java.
I want to get all the cache names in an Infinispan cache cluster.
I have tried using
DefaultCacheManager.getCacheNames();
but it returns only the caches that have been accessed on the JVM it is called from, not all the caches in the cluster.
Once I access a cache on that JVM, it becomes available and starts showing up in the cache list I get from
DefaultCacheManager.getCacheNames();
I am using the same config file for Infinispan and JGroups (using TCP).
Please suggest a way to get all the cache names in a cluster.
Thanks,
Ankur

Hmmm, normally you'd have all caches defined cluster wide, so getting the cache names on one node is enough to know which caches are available across the whole cluster.
That doesn't seem to be your case though, so the easiest thing I can think of is to use Infinispan's Map/Reduce (distributed execution) functionality to retrieve the cache names from the individual nodes in the cluster and then collate them.
For more info, see https://docs.jboss.org/author/display/ISPN/Infinispan+Distributed+Execution+Framework and https://www.jboss.org/dms/judcon/presentations/Boston2011/JUDConBoston2011_day2track2session2.pdf
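A rough sketch of that idea, assuming the distributed execution API of the Infinispan 5.x-8.x era (DefaultExecutorService and DistributedCallable; both were deprecated in later versions); the class and method names CacheNamesTask and allCacheNames are just illustrative:
import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Future;

import org.infinispan.Cache;
import org.infinispan.distexec.DefaultExecutorService;
import org.infinispan.distexec.DistributedCallable;
import org.infinispan.manager.EmbeddedCacheManager;

public class ClusterCacheNames {

    // Task that runs on each node and returns that node's locally known cache names.
    public static class CacheNamesTask
            implements DistributedCallable<String, String, Set<String>>, Serializable {

        private transient EmbeddedCacheManager cacheManager;

        @Override
        public void setEnvironment(Cache<String, String> cache, Set<String> inputKeys) {
            // Infinispan injects the local cache here; from it we reach the node's manager.
            this.cacheManager = cache.getCacheManager();
        }

        @Override
        public Set<String> call() {
            return new HashSet<String>(cacheManager.getCacheNames());
        }
    }

    public static Set<String> allCacheNames(Cache<String, String> anyClusteredCache) throws Exception {
        DefaultExecutorService des = new DefaultExecutorService(anyClusteredCache);
        Set<String> allNames = new HashSet<String>();
        // submitEverywhere() runs the task on every node; collate the results.
        for (Future<Set<String>> f : des.submitEverywhere(new CacheNamesTask())) {
            allNames.addAll(f.get());
        }
        return allNames;
    }
}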

Related

Infinispan. Can I have multiple clusters/namespaces/groups inside single server instance?

I want to connect to a single Infinispan server instance from multiple environments, grouping caches into namespaces/groups. Is this possible? I use Micronaut Cache with Infinispan 13. Thanks
Unfortunately it is no longer possible to use multiple cache containers (which would have given you this functionality).

Ehcache cache the data with support cluster disk storage (replication)

I am using Ehcache in our application, and it works on a single server, persisting data to disk. As we move to production, we have two different servers clustered in our application.
If the first request goes to server A, the data is cached on server A's disk and everything works fine. But if a later request goes to server B, the application cannot find the cached data, because the cached object is on server A's disk. How do we replicate both disks in our ehcache-config.xml?
I recommend looking into clustering support with Terracotta.
See the documentation for this:
Ehcache 3.x line
Ehcache 2.x line
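With Ehcache 3.x, for instance, the clustered tier can be set up programmatically. A minimal sketch, assuming a Terracotta server reachable at a placeholder URI and a server off-heap resource named "primary-server-resource" (both are assumptions, not your actual setup):
import java.net.URI;

import org.ehcache.Cache;
import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteredResourcePoolBuilder;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

public class ClusteredCacheExample {
    public static void main(String[] args) {
        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            // Point both server A and server B at the same Terracotta server,
            // so the cached data lives there instead of on each local disk.
            .with(ClusteringServiceConfigurationBuilder
                .cluster(URI.create("terracotta://terracotta-host/clustered"))
                .autoCreate())
            .withCache("sharedCache", CacheConfigurationBuilder
                .newCacheConfigurationBuilder(Long.class, String.class,
                    ResourcePoolsBuilder.newResourcePoolsBuilder()
                        // A clustered pool: entries are visible to every client
                        // connected to the same Terracotta server.
                        .with(ClusteredResourcePoolBuilder
                            .clusteredDedicated("primary-server-resource", 8, MemoryUnit.MB))))
            .build(true);

        Cache<Long, String> cache = cacheManager.getCache("sharedCache", Long.class, String.class);
        cache.put(1L, "visible from both servers");
        cacheManager.close();
    }
}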

Can I distribute the data from one machine and have this data be read from all nodes in the cluster?

We have to work with some global parameters. We want to calculate these parameters on one machine and distribute them to the Ignite cluster; can we do this?
According to the guidance on the official website, Ignite is a distributed cluster with no master, slave, or standby nodes.
By the way, we want to use Ignite lightly at first; we will use it as a Spring bean in our distributed system.
Yes, you can do that easily with a REPLICATED Ignite cache. Every value that you put into that cache will be copied to all other nodes in the cluster. Data won't be lost as long as at least one node is still running.
Here is some documentation on cache modes.
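A minimal sketch using Ignite's Java API; the cache name "globalParams" and the sample entry are placeholders:
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class GlobalParams {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, Double> cfg = new CacheConfiguration<>("globalParams");
        // REPLICATED mode keeps a full copy of every entry on every node.
        cfg.setCacheMode(CacheMode.REPLICATED);

        IgniteCache<String, Double> cache = ignite.getOrCreateCache(cfg);
        // Calculated once here, readable from any node in the cluster.
        cache.put("interestRate", 0.05);
    }
}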

Infinispan remote cache access

I am a newbie to Infinispan, learning by experimenting. I need some help after failing to access a remote cache with a different name. Here is my scenario, using Infinispan in client-server mode, not embedded.
1) I started node1 in the Infinispan cluster and set the default remote cache name to node1_cache. --HotRod server started
2) I started node2 in the Infinispan cluster and set the default remote cache name to node2_cache. --HotRod server started
Now, from the HotRod client I can see that the RemoteCacheManager initializes properly, the cluster is set up properly, and the console shows the nodes joining each other.
But the problem is that from one single client:
1) When I try to get the RemoteCache using the name node1_cache, I get the instance.
2) But when I try to access node2_cache, it gives me null for the RemoteCache instance.
Am I accessing it correctly, or am I missing something?
Shouldn't a single client be able to access all the caches of all the nodes configured across the cluster?
Please guide me. Thank you.
After a good amount of digging into distributed cache concepts, I figured out the following.
1) I was using two cluster-configuration files for the two Infinispan nodes, one with the distributed cache name node1_cache and the other with node2_cache.
2) If you have multiple caches with different names, then all of those caches must be defined in the configuration files of all the Infinispan HotRod servers in the same cluster. In this case, that means both config files must define both node1_cache and node2_cache. Only then can we access and use both caches when we call
remoteCacheManager.getCache("cacheName");
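A minimal sketch of the client side, assuming both caches are now defined in every server's configuration; the host and port below are placeholders:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);

        RemoteCacheManager rcm = new RemoteCacheManager(builder.build());

        // Once node1_cache and node2_cache are defined in both servers'
        // configs, a single client can reach either one.
        RemoteCache<String, String> c1 = rcm.getCache("node1_cache");
        RemoteCache<String, String> c2 = rcm.getCache("node2_cache");

        c1.put("k1", "v1");
        c2.put("k2", "v2");

        rcm.stop();
    }
}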

Solutions for a secure distributed cache

Problem: I want to cache user information such that all my applications can read the data quickly, but I want only one specific application to be able to write to this cache.
I am on AWS, so one solution that occurred to me was a version of memcached with two ports: one port that accepts read commands only and one that accepts reads and writes. I could then use security groups to control access.
Since I'm on AWS, if there are solutions that use out-of-the box memcached or redis, that'd be great.
I suggest you use ElastiCache with one open port at 11211 (Memcached), then create an EC2 instance and set your security group so that only this server can access your ElastiCache cluster. Use this server to filter your applications, so that only one specific application can write to the cache. You control access with security groups, scripts, or iptables. If you are not using a VPC, you can use a cache security group instead.
I believe you can accomplish this using Redis (instead of Memcached), which is also available via ElastiCache. Once the instance has been created, you will want to create a replication group and associate it with the cache cluster you already launched.
You can then add instances to the replication group. Instances within the replication group are simply replicated from the master cache cluster (a single Redis instance) and so are (by default) read-only.
So, in this setup, you have a master node (a single endpoint) that you can write to and as many read nodes (multiple endpoints) as you would like.
You can take security a step further and assign different routing rules to the replication group (via the VPC) so that the applications reading data do not have access to the master node (the only one that can write data).
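For illustration, a rough sketch using the Jedis client with hypothetical primary and replica endpoints (the endpoint names below are made up, not real ElastiCache values):
import redis.clients.jedis.Jedis;

public class SplitEndpointExample {
    public static void main(String[] args) {
        // Writer application: connects to the primary (writable) endpoint.
        try (Jedis primary = new Jedis("my-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com", 6379)) {
            primary.set("user:42:name", "Ankur");
        }

        // Reader applications: connect to a read-replica endpoint only;
        // VPC routing keeps them away from the primary.
        try (Jedis replica = new Jedis("my-cache-ro.xxxxxx.ng.0001.use1.cache.amazonaws.com", 6379)) {
            System.out.println(replica.get("user:42:name"));
        }
    }
}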
