I need to create a hostgroup with a master and n slaves.
Can I configure it in a way that all requests will be served by the slave DBs only (not by the master), unless the slaves get shunned?
Yes, and by "requests" I assume you mean only READ requests; WRITEs will still be forwarded to the master.
Make sure to configure mysql_replication_hostgroups and create query rules in the mysql_query_rules table. Here's a blog post by a friend that I hope will get you started: proxysql-tutorial-master-and-slave
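As a minimal sketch of that routing (the hostgroup IDs 10/20, the digests, and the comment are placeholder assumptions, not values from your setup), entered through the ProxySQL admin interface:

    -- pair a writer and a reader hostgroup (IDs are placeholders)
    INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
    VALUES (10, 20, 'master + n slaves');

    -- send SELECT ... FOR UPDATE to the master, every other SELECT to the slaves
    INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1),
           (2, 1, '^SELECT', 20, 1);

    LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
    LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

With mysql_replication_hostgroups in place, ProxySQL's monitor watches each backend's read_only flag and moves servers between the two hostgroups automatically, so a shunned slave drops out of the reader pool without any client-side changes.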
I have two servers in HA mode. I'd like to know whether it is possible to deploy an application on the slave server only. If yes, how do I configure it in JGroups? I need to run a specific program that accesses the master database, but I would not like to run it on the master server, to avoid overhead on it.
JGroups itself does not know much about WildFly and the deployments; it only creates a communication channel between nodes. I don't know where you get the notion of master/slave, but JGroups always has a single* node marked as coordinator. You can check the membership through Channel.getView().
However, you still need to deploy the app on both nodes and just make it inactive if this is not its target node.
*) If there's no split-brain partition, or similar rare/temporal issues
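For illustration, a minimal sketch of that check (the cluster name and the activate-only-here logic are assumptions, not from the question; by JGroups convention the first member of the view is the coordinator):

    import org.jgroups.JChannel;
    import org.jgroups.View;

    public class CoordinatorCheck {
        public static void main(String[] args) throws Exception {
            // default protocol stack; inside WildFly the channel is normally managed for you
            JChannel channel = new JChannel();
            channel.connect("my-cluster");   // placeholder cluster name

            // the first member of the view is the coordinator
            View view = channel.getView();
            boolean coordinator =
                view.getMembers().get(0).equals(channel.getAddress());

            // activate the app only on the coordinator; leave it deployed but idle elsewhere
            System.out.println("coordinator? " + coordinator);
            channel.close();
        }
    }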
I have a scenario where we want to use redis, but I am not sure how to go about setting it up. Here is what we want to achieve eventually:
A redundant central Redis cluster, with servers in two AWS regions, where all writes will occur.
Local Redis caches on servers, each holding a replica of the complete central cluster.
The reason for this is that we have many servers which need read access only, and we want them to be independent even in case of an outage (where the server cannot reach the main cluster).
I know there might be a "stale data" issue within the caches, but we can tolerate that as long as we get eventual consistency.
What is the correct way to achieve something like that using redis?
Thanks!
You need the Redis Replication (Master-Slave) Architecture.
Redis Replication:
Redis replication is a simple-to-use-and-configure master-slave replication mechanism that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:
Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves periodically acknowledge the amount of data processed from the replication stream.
A master can have multiple slaves.
Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a cascading-like structure.
Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more slaves perform the initial synchronization.
Replication is also non-blocking on the slave side. While the slave is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The slave will block incoming connections during this brief window (that can be as long as many seconds for very large datasets).
Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, slow O(N) operations can be offloaded to slaves), or simply for data redundancy.
It is possible to use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master's redis.conf to avoid persisting to disk at all, then connecting a slave configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the slave tries to synchronize with it, the slave will be emptied as well.
Go through the steps: How to Configure Redis Replication.
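A minimal redis.conf sketch for each slave, assuming a placeholder master address (directive names follow the pre-5.0 master/slave naming used above):

    # redis.conf on each slave / local cache server
    # (master IP and port below are placeholders)
    slaveof 203.0.113.10 6379
    # uncomment if the master requires AUTH
    # masterauth <master-password>
    # reject writes on this replica
    slave-read-only yes
    # keep serving (possibly stale) reads if the link to the master drops,
    # matching the eventual-consistency tolerance described above
    slave-serve-stale-data yes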
So I decided to go with redis-sentinel.
Using redis-sentinel, I can set the slave-priority on the cache servers to 0, which will prevent them from becoming masters.
I will have one master set up, and a few "backup masters", which will actually be slaves with slave-priority set to a non-zero value, allowing them to take over once the master goes down.
The sentinel will monitor the master, and once the master goes down it will select one of the "backup masters" and promote it to be the new master.
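A minimal sketch of that layout (the master name "mymaster", the addresses, the quorum of 2, and the timeouts are placeholders):

    # redis.conf on the cache-only servers: never eligible for promotion
    slave-priority 0

    # redis.conf on each "backup master": any non-zero priority (lower is preferred)
    slave-priority 100

    # sentinel.conf on each sentinel node
    sentinel monitor mymaster 203.0.113.10 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000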
More info can be found here
I have a 3-master Mesos setup for a cluster, with 3 different IPs. But I have a silly question here: how does an application resolve which URI to access? Let's say I am just browsing the admin console; do I have to try all 3 ip:5050 endpoints to get a hit?
Since there is only ever one active (leading) Mesos Master in an HA setup, you only need to access that one IP. There seems to be a second question intermingled with the Master-related one, which I think revolves around the general case of mapping Mesos tasks to IP:PORT combinations: for this, Mesos-DNS is a useful solution.
You should be able to try just one Master endpoint. It will redirect to the "real" Master automatically after 5 seconds if you hit a non-leading Master.
When you use Mesos-DNS and add it to your local system's DNS preferences, you can simply enter http://leader.mesos:5050/ in your browser, for example, to access the currently leading master's WebUI.
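For illustration, a minimal Mesos-DNS config.json under assumed addresses; either the zk entry or the static masters list lets it discover the current leader, and the "mesos" domain is what makes leader.mesos resolvable:

    {
      "zk": "zk://203.0.113.1:2181/mesos",
      "masters": ["203.0.113.1:5050", "203.0.113.2:5050", "203.0.113.3:5050"],
      "domain": "mesos",
      "port": 53,
      "refreshSeconds": 60
    }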
I seem to be having trouble viewing my JMS queues after applying failover transport to ActiveMQ. The queues can be viewed from the master using the usual URL http://localhost:8162/admin/queues.jsp, but it does not work when tried on the slave. I need to see the queues created when the master is down and the slave takes over. Any idea how to make this work?
When both master and slave point to the same folder for the data repository, the arrangement is called a "master-slave configuration with shared database". In this case the following things occur:
When your master node starts up, it acquires a lock on this database and starts up successfully, so you can access this node's details from the UI.
But when a slave node starts up, it tries to gain the lock on the database; as it is already locked by the master node, it cannot gain the lock, so it keeps polling the DB for the lock and does not start up. (This is expected and correct behaviour.)
Now whenever the master node fails, it releases the lock, and the lock is gained by the slave node (as it is continuously polling the DB); the slave then starts up. This way only one node is up at any given time, and if that node fails the slave node starts up.
In your case, if you shut down your master node you will surely be able to access the slave node from the UI.
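For reference, a minimal sketch of what that shared-database setup looks like in activemq.xml on both brokers (the datasource bean id "#shared-ds" is a placeholder for a bean pointing at your shared database):

    <!-- activemq.xml, identical on master and slave -->
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataDirectory="${activemq.base}/data"
                                dataSource="#shared-ds"/>
    </persistenceAdapter>

The slave's web console (your :8162 URL) only answers once the slave has acquired the lock and actually started.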
Hope this helps!
Good luck!
I want to add a replica of our whole eDirectory tree to a new server (OES11.2 SLES11.3).
So I wanted to do so via iManager. (Partitions and Replicas / Replica View / Add Replica)
Everything looks normal. I see our other servers with added replicas and of course the server holding the master replica.
For additional information: I have done this many times without problems until now.
When I want to add a replica to the new server, I get the following error: (Error -636) The server is unreachable.
I checked the /etc/hosts file and the network settings on both servers.
Ndsrepair looks normal too. All servers are in sync and there are no connection errors. The replica depth of the new server is -1. I get that, because there is no replica on it yet.
But if I can connect from one server to another and there are no error messages, why does adding a replica not work?
I also tried to make a LAN trace, but didn't get any information that would help me out here. In the trace the communication seems normal!
Am I forgetting something here?
Every server in our environment runs OES11.2 except the master server which runs OES11.1
Thanks for your help!
Daniel
Nothing is wrong.
Error -636 means that the replica is not yet available on the new server. Once the synchronization completes, the replica will be ready and available. Depending on the size of the tree and the communication channel, this can take up to several hours.
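While waiting, the synchronization can be rechecked from the command line with standard ndsrepair switches (assuming a Linux/OES shell on a server holding a replica):

    ndsrepair -E    # report replica synchronization status and errors
    ndsrepair -T    # verify time synchronization across all servers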