I am a newbie to Infinispan and learning by experimenting. I need some help after failing to access a remote cache of a different name. Here is my scenario, using Infinispan in client-server mode (not embedded).
1) I started node1 in the Infinispan cluster and set the default remote cache name to node1_cache. -- Hot Rod server started.
2) Started node2 in the Infinispan cluster and set the default remote cache name to node2_cache. -- Hot Rod server started.
Now, from the Hot Rod client, I can see that the RemoteCacheManager initializes properly; the cluster is also set up properly, and the nodes can be seen joining each other in the console.
But the problem, from one single client, is:
1) when I try to get the RemoteCache using the name node1_cache, I get the instance;
2) but when I try to access node2_cache, it gives me null for the RemoteCache instance.
Am I correct in accessing the caches this way, or am I missing something?
Isn't it the case that a single client can access all the caches of all the nodes configured across the cluster?
Please guide me. Thank you.
After a good amount of digging into distributed cache concepts, I figured out the following.
1) I was using two cluster configuration files for the two Infinispan nodes, one defining a distributed cache named node1_cache and the other node2_cache.
2) What I figured out is that if you have multiple caches with different names, then all of those caches must be defined in the configuration files of all the Infinispan Hot Rod servers in the same cluster. In this case, that means both config files must define both node1_cache and node2_cache. Only then can we access and use both caches when we call
remoteCacheManager.getCache("cacheName");.
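For illustration, here is a minimal Hot Rod client sketch (the server address is a placeholder, and the ConfigurationBuilder API shown assumes a reasonably recent Infinispan client):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class TwoCacheClient {
    public static void main(String[] args) {
        // Point the client at any Hot Rod server in the cluster (placeholder address).
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("192.168.0.1").port(11222);
        RemoteCacheManager rcm = new RemoteCacheManager(builder.build());

        // Both lookups succeed only when BOTH caches are defined in the
        // configuration file of EVERY server in the cluster.
        RemoteCache<String, String> c1 = rcm.getCache("node1_cache");
        RemoteCache<String, String> c2 = rcm.getCache("node2_cache"); // null if a node lacks the definition

        c1.put("greeting", "hello");
        System.out.println(c1.get("greeting"));
        rcm.stop();
    }
}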
I have two servers in HA mode. I'd like to know whether it is possible to deploy an application on the slave server, and if so, how to configure that in JGroups. I need to run a specific program that accesses the master database, but I would rather not run it on the master server, to avoid overhead there.
JGroups itself does not know much about WildFly and the deployments; it only creates a communication channel between nodes. I don't know where you got the notion of master/slave, but JGroups always has a single* node marked as the coordinator. You can check the membership through Channel.getView(), as in the sketch below.
However, you still need to deploy the app on both nodes and just make it inactive if this is not its target node.
*) Assuming there's no split-brain partition or similar rare/temporary issue.
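A minimal sketch of that membership check (the channel name is arbitrary, and the default protocol stack is assumed):

import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.View;

public class CoordinatorCheck {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();      // default protocol stack
        channel.connect("my-cluster");          // arbitrary cluster name
        View view = channel.getView();
        // By convention, the first member of the view is the coordinator.
        Address coordinator = view.getMembers().get(0);
        boolean isCoordinator = coordinator.equals(channel.getAddress());
        System.out.println("This node is coordinator: " + isCoordinator);
        // An app deployed on every node could use this flag to stay
        // inactive on nodes that are not its target.
        channel.close();
    }
}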
Problem: I want to cache user information such that all my applications can read the data quickly, but I want only one specific application to be able to write to this cache.
I am on AWS, so one solution that occurred to me was a version of memcached with two ports: one port that accepts read commands only and one that accepts reads and writes. I could then use security groups to control access.
Since I'm on AWS, if there are solutions that use out-of-the box memcached or redis, that'd be great.
I suggest you use ElastiCache with one open port at 11211 (memcached), then create an EC2 instance and set your security group so that only this server can access your ElastiCache cluster. Use this server to filter your applications, so only the one specific application can write to it. You control the access with security groups, scripts, or iptables. If you are not using a VPC, you can use a cache security group instead.
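For example, with the AWS CLI (the group IDs below are placeholders), you could allow port 11211 on the cache's security group only from the writer instance's security group:

aws ec2 authorize-security-group-ingress \
    --group-id sg-cache0000 \
    --protocol tcp \
    --port 11211 \
    --source-group sg-writer0000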
I believe you can accomplish this using Redis (instead of memcached), which is also available via ElastiCache. Once the instance has been created, you will want to create a replication group and associate it with the cache cluster you already launched.
You can then add instances to the replication group. Instances within the replication group are simply replicas of the master cache cluster (a single Redis instance) and so are read-only by default.
So, in this setup, you have a master node (single endpoint) that you can write to and as many read nodes (multiple endpoints) as you would like.
You can take security a step further and assign different routing rules to the replication group (via the VPC) so the applications reading data do not have access to the master node (the only one that can write data).
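As a rough illustration of the read/write split (the endpoints are placeholders, and Jedis is just one common Java client for Redis):

import redis.clients.jedis.Jedis;

public class ReadWriteSplit {
    public static void main(String[] args) {
        // Placeholder ElastiCache endpoints: the writer app gets the master,
        // the reader apps get replica endpoints only.
        Jedis writer = new Jedis("master.example.cache.amazonaws.com", 6379);
        Jedis reader = new Jedis("replica.example.cache.amazonaws.com", 6379);

        writer.set("user:42:name", "alice");            // only the writer knows this endpoint
        System.out.println(reader.get("user:42:name")); // replicas serve the reads

        writer.close();
        reader.close();
    }
}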
I'm deploying my Grails (2.3.6) app with the Grails Standalone App Runner plugin, like so:
grails -Dgrails.env=prod build-standalone myapp.jar --tomcat
Then, my CI build places myapp.jar onto my app server, say, myapp01.
I now want to cluster app sessions when myapp is running on multiple nodes. So if myapp gets deployed to myapp01, myapp02 and myapp03, and one of those instances starts a new session with a user, I want all 3 to be aware of the same session. This is so I can put all the nodes behind a load-balanced URL (http://myapp.example.com, etc.) and it doesn't matter which node you get routed to: all nodes share the same sessions.
I googled "grails session clustering" and see a bunch of articles that seem to require terracotta, but I also heard that Grails has built-in session clustering facilities. But any searches I do come back empty-handed.
So I ask: How can I achieve this kind of session clustering with an embedded Tomcat?
Besides the session-cookie plugin that @injecteer proposed, there are several other plugins that let you keep sessions in a shared storage (DB, MongoDB, Redis, memcached) that can be accessed by any of your Tomcat instances. Take a look at these:
http://grails.org/plugin/database-session
http://grails.org/plugin/mongodb-session
http://grails.org/plugin/redis-database-session
http://grails.org/plugin/standalone-tomcat-redis
http://grails.org/plugin/standalone-tomcat-memcached
I never heard of anything like this out of the box. I would give 2 options a try:
Use the session-cookie plugin, with which you decouple your clients from sessions being stored in Tomcat.
Use or implement persistent sessions, which are stored in some sort of DB and are not bound to any Tomcat instance.
You could achieve this by using Tomcat's built-in functionality: a Tomcat instance node can replicate sessions from the others, so all sessions get shared between nodes.
You can do this in at least three ways:
Session replication using multicast between instance nodes (see the sketch after this list).
Session replication just between a primary node and a secondary backup node.
Session replication between static memberships; this is useful when multicast cannot be enabled or supported, such as in an AWS EC2 environment.
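A minimal sketch of the first (multicast) variant, following the Tomcat clustering how-to linked below: add the default Cluster element to server.xml, inside the Engine or Host element, and mark the webapp distributable in web.xml. The defaults give all-to-all replication with the DeltaManager.

<!-- server.xml: default all-to-all session replication (multicast + DeltaManager) -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

<!-- web.xml: the application must opt in to session replication -->
<distributable/>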
Reference:
http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
http://khaidoan.wikidot.com/tomcat-cluster-session-replication
We are switching from a non-clustered environment to a 2-node clustered MSMQ Windows Server 2008 R2 SP1 Enterprise environment. Previously, when it was non-clustered, we wrote a .NET 3.5 C# Windows Forms application to help us manage our environment (it does tasks such as creating queues with the right permissions, reading messages, forwarding messages, etc.). I would like to make this application work with our new cluster.
Per these articles,
http://blog.terranspot.com/2011/07/accessing-microsoft-message-queuing.html
http://blogs.msdn.com/b/johnbreakwell/archive/2008/02/18/clustering-msmq-applications-rule-1.aspx
I understand that I need to add the application as a resource on the cluster, because when I don't, I am accessing the node's MSMQ instance. To help with my debugging, I have turned the local MSMQ services off. No matter what I do, however, the program keeps trying to access the node's instance. I added it as an application resource (with a command line of "Q:\QueueManagerConsole.exe"; Q:\ is the disk shared between the 2 nodes as part of the failover cluster), but when I run it via Windows Explorer, it sees only the local instance, not the cluster instance. I have seen no way to execute a program from Failover Cluster Manager, so I don't understand what I am doing wrong. I switched the code to access everything via "." (so MessageQueue.GetPrivateQueuesByMachine(".")), which, per my meager understanding, is how you access the local queues. Could someone explain, preferably acting as if I had no clue what I was doing: a) whether this IS possible, and b) HOW to do this correctly?
Hi, I did something similar a while ago. Try deploying a service in a failover cluster; it worked for me to:
configure the app to use the clustered MSMQ
configure the app as a clustered resource
configure the app to connect under the cluster host name
set the permission set required for transport
At least this will give you a good starting point.
I finally got this working by creating a shortcut to the application and putting it on the server that was actually accessing the clustered queues.
Try adding the following environment variables to the environment used by your application:
_CLUSTER_NETWORK_NAME_
_CLUSTER_NETWORK_HOSTNAME_
with the cluster server name as the value. This worked in the system being developed by my team: it contains a few services that had to access a clustered MSMQ, and it solved the problem.
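For example, in a batch file that launches the tool on a cluster node (the cluster name is a placeholder; the path is the one from the question):

rem MYCLUSTER is a placeholder; use your cluster's network name.
set _CLUSTER_NETWORK_NAME_=MYCLUSTER
set _CLUSTER_NETWORK_HOSTNAME_=MYCLUSTER
Q:\QueueManagerConsole.exe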
I am using Infinispan with JGroups in Java.
I want to get all the cache names in an infinispan cache cluster.
I have tried using
DefaultCacheManager.getCacheNames();
but it returns only the caches that have been accessed on the JVM it is called from, not all the caches in the cluster.
Once I access a cache on that JVM, it becomes available and starts appearing in the cache list I get from
DefaultCacheManager.getCacheNames();
I am using the same config file for Infinispan and JGroups (using TCP).
Please suggest a way by which I can get all the cache names in a cluster.
Thanks,
Ankur
Hmmm, normally you'll have all caches defined cluster-wide, so getting the cache names on one node is good enough to know which caches are available across the cluster.
This doesn't seem to be your case though, so the easiest thing I can think of is to use Infinispan's Map/Reduce (distributed execution) functionality to retrieve the cache names from the individual nodes in the cluster and then collate them.
For more info, see https://docs.jboss.org/author/display/ISPN/Infinispan+Distributed+Execution+Framework and https://www.jboss.org/dms/judcon/presentations/Boston2011/JUDConBoston2011_day2track2session2.pdf
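A rough sketch using the distributed execution framework (this assumes the Infinispan 5.x DistributedCallable/DefaultExecutorService API; the class name and the clustered cache you run it over are up to you):

import java.io.Serializable;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Future;
import org.infinispan.Cache;
import org.infinispan.distexec.DefaultExecutorService;
import org.infinispan.distexec.DistributedCallable;
import org.infinispan.distexec.DistributedExecutorService;

// Each node returns the cache names its local cache manager knows about;
// the caller collates the per-node results into one set.
public class CacheNamesTask implements DistributedCallable<Object, Object, Set<String>>, Serializable {

    private transient Cache<Object, Object> cache;

    @Override
    public void setEnvironment(Cache<Object, Object> cache, Set<Object> inputKeys) {
        this.cache = cache;   // the cache this task runs against on each node
    }

    @Override
    public Set<String> call() throws Exception {
        return new HashSet<String>(cache.getCacheManager().getCacheNames());
    }

    public static Set<String> allCacheNames(Cache<Object, Object> anyClusteredCache) throws Exception {
        DistributedExecutorService des = new DefaultExecutorService(anyClusteredCache);
        List<Future<Set<String>>> futures = des.submitEverywhere(new CacheNamesTask());
        Set<String> all = new HashSet<String>();
        for (Future<Set<String>> f : futures)
            all.addAll(f.get());
        return all;
    }
}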