I have a MySQL master-slave configuration in which replication is effectively instant.
I would like replication to run 60 (or x) minutes behind the master.
How do I accomplish this?
I read that MySQL 5.6 has such an option, but I couldn't find anything equivalent for my version, which is 5.5.
Cheers,
D
http://www.percona.com/doc/percona-toolkit/2.2/pt-slave-delay.html
pt-slave-delay watches a slave and starts and stops its replication SQL thread as necessary to hold it at least as far behind the master as you request. In practice, it will typically cause the slave to lag between --delay and --delay+--interval behind the master.
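For MySQL 5.5, where a built-in delay is not available, a typical invocation might look like the following sketch (host name and credentials are placeholders):

```
# Hold the slave at least 1 hour behind the master,
# re-checking lag every 15 seconds; run until interrupted:
pt-slave-delay --delay 1h --interval 15s --user repl --ask-pass slave-host

# On MySQL 5.6+ the equivalent is built in (run on the slave):
#   STOP SLAVE;
#   CHANGE MASTER TO MASTER_DELAY = 3600;
#   START SLAVE;
```

Note that pt-slave-delay must keep running for the delay to be enforced; if the tool stops, the slave catches up to the master again.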
Related
I have a scenario where we want to use redis, but I am not sure how to go about setting it up. Here is what we want to achieve eventually:
A redundant central Redis cluster, spanning servers in two AWS regions, where all the writes will occur.
Local redis caches on servers which will hold a replica of the complete central cluster.
The reason for this is that we have many servers which need read access only, and we want them to be independent even in case of an outage (where the server cannot reach the main cluster).
I know there might be a "stale data" issue within the caches, but we can tolerate that as long as we get eventual consistency.
What is the correct way to achieve something like that using redis?
Thanks!
You need the Redis replication (master-slave) architecture.
Redis replication:
Redis replication is a very simple to use and configure master-slave replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:
Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves periodically acknowledge the amount of data processed from the replication stream.
A master can have multiple slaves.
Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a cascading-like structure.
Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more slaves perform the initial synchronization.
Replication is also non-blocking on the slave side. While the slave is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The slave will block incoming connections during this brief window (that can be as long as many seconds for very large datasets).
Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, slow O(N) operations can be offloaded to slaves), or simply for data redundancy.
It is possible to use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master redis.conf to avoid persisting to disk at all, then connecting a slave configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the slave tries to synchronize with it, the slave will be emptied as well.
Go through the steps: How to Configure Redis Replication.
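For the use case in the question, a minimal sketch of the replica-side configuration, assuming the central master runs at 10.0.0.1:6379 (the address is a placeholder), is just a few lines in each cache server's redis.conf:

```
# redis.conf on each local cache server
slaveof 10.0.0.1 6379        # point this replica at the central master
slave-read-only yes          # reject writes on the cache
slave-serve-stale-data yes   # keep answering reads if the link to the master drops
```

The slave-serve-stale-data setting is what gives you the "keep serving reads during an outage, with eventual consistency" behavior described in the question.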
So I decided to go with redis-sentinel.
Using redis-sentinel I can set slave-priority to 0 on the cache servers, which prevents them from ever becoming masters.
I will have one master set up, plus a few "backup masters" which are actually slaves with a non-zero slave-priority, allowing them to take over once the master goes down.
The sentinel monitors the master, and once the master goes down it promotes one of the "backup masters" to be the new master.
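A sketch of that layout, with placeholder addresses; the key point is slave-priority 0 on the cache-only replicas so Sentinel never promotes them:

```
# redis.conf on a cache server (never promotable)
slaveof 10.0.0.1 6379
slave-priority 0

# redis.conf on a "backup master" (promotable)
slaveof 10.0.0.1 6379
slave-priority 100

# sentinel.conf (run sentinels on at least 3 hosts for a reliable quorum)
sentinel monitor mymaster 10.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

The `2` in the monitor line is the quorum: how many sentinels must agree the master is down before a failover starts.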
More info can be found here
I'm using a ZooKeeper cluster (3 machines) for my Storm cluster (4 machines). The problem is that, because of the topologies deployed on the Storm cluster, the ZooKeeper transaction logs grow extremely large, filling up the ZooKeeper disk. What is really strange is that those logs are not divided into multiple files; instead there is one big transaction file on every ZooKeeper machine, so the autopurge settings in my ZooKeeper configuration have no effect on those files.
Is there a way to solve this problem on the ZooKeeper side, or can I change the way Storm uses ZooKeeper to minimize the size of those logs?
Note: I'm using ZooKeeper 3.6.4 and Storm 0.9.6.
I was able to resolve this problem by using Pacemaker to process heartbeats from workers instead of ZooKeeper. That allowed me to avoid writing to the ZooKeeper disk to maintain consistency, and to use an in-memory store instead. To be able to use Pacemaker I upgraded to Storm 1.0.2.
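For reference, the Pacemaker switch in Storm 1.0.x is a storm.yaml change along these lines (the host name is a placeholder; check the Storm documentation for your exact version):

```
# storm.yaml on nimbus, supervisors, and workers
storm.cluster.state.store: "org.apache.storm.pacemaker.pacemaker_state_factory"
pacemaker.host: "pacemaker-host.example.com"
pacemaker.port: 6699
```

With this in place, worker heartbeats go to the Pacemaker daemon's in-memory store rather than being written into the ZooKeeper transaction log.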
I have an application running on two servers. There is a mariadb running on each one of them and a galera cluster that takes care of the replication.
When upgrading the app I need to stop the replication, so I would like some guidelines on how to start the database outside the cluster on one of the servers, and on the best way to reconnect it afterwards.
All ideas are appreciated
Thanks in advance
You can remove the node from the cluster by setting wsrep_cluster_address=gcomm:// in that node's config file. Then restart the server; this time the node will run outside the cluster, hence no replication. Similarly, to reconnect, change wsrep_cluster_address back to whatever it was earlier and restart the node.
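Roughly, the sequence on the node you want to detach looks like this (service names and the example cluster address are placeholders; adjust for your distro and topology):

```
# 1. In the node's my.cnf / galera.cnf, replace the real cluster address
#    with an empty group:
#      wsrep_cluster_address = gcomm://
# 2. Restart MariaDB so the node leaves the cluster:
service mysql restart
# ... perform the upgrade; the node now runs standalone, no replication ...
# 3. Restore the original setting, e.g.:
#      wsrep_cluster_address = gcomm://node1,node2
#    and restart again; the node rejoins and resyncs via IST or SST:
service mysql restart
```

Keep in mind that writes made on the detached node while it is standalone will conflict with the cluster's state on rejoin, so it is safest to keep the standalone node read-only or let the cluster state win via a full SST.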
Hope it helps.
My postgresql server seems to be intermittently going down. I have PgBouncer pool in front of it, so the website hits are well managed, or were until recently.
When I explore what's up with the top command, I see the postmaster doing some CLUSTER. There's no CLUSTER command in any of my cron jobs, though. Is this what autovacuum is called these days?
How can I start to find out what's happening? What commands are the usual tricks in a PostgreSQL DBA's toolbox? I'm a bit new to this database and only looking for starting points.
Thank you!
No, autovacuum never runs CLUSTER. You have something on your system that's doing so - daemon, cron job, or whatever. Check individual user crontabs.
CLUSTER takes an exclusive lock on the table. So that's probably why you think the system is "going down" - all queries that access this table will wait for the CLUSTER to complete.
The other common cause of people reporting intermittent issues is checkpoints taking a long time on slow disks. You can enable checkpoint logging to see if that's an issue. There's lots of existing info on dealing with checkpointing performance issues, so I won't repeat it here.
The other key tools are:
The pg_stat_activity and pg_locks views
pg_stat_statements
The PostgreSQL logs, with a useful log_line_prefix, log_checkpoints enabled, a log_min_duration_statement, etc
auto_explain
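A starting point for the logging side, assuming you can edit postgresql.conf and restart; the specific values are reasonable defaults to experiment with, not prescriptions:

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements,auto_explain'
log_line_prefix = '%m [%p] %u@%d '    # timestamp, pid, user, database
log_checkpoints = on                  # see whether checkpoints coincide with the stalls
log_min_duration_statement = 1000     # log every statement slower than 1 second
log_lock_waits = on                   # log sessions stuck waiting on a lock (e.g. behind CLUSTER)
```

While an incident is happening, querying pg_stat_activity will show you which session is running the CLUSTER, and pg_locks will show which queries are queued behind its exclusive lock.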
I am using SolrCloud (version 4.7.1) with 4 instances and embedded ZooKeeper (test environment).
When I simulate failure of one of the instances, the indexing time goes from 4 seconds to 17 seconds.
It goes back to 4 seconds after the instance is brought back to life.
Search speed is not affected.
Our production environment shows similar behavior (only the configuration is more complex).
Is this normal or did I miss some configuration option?
It is due to having ZooKeeper embedded in the Solr cluster.
Please try with an external ZooKeeper ensemble. That setup should give the expected results.
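On Solr 4.x that means starting each node with zkHost pointing at the external ensemble instead of using zkRun (the addresses below are placeholders):

```
# Start a SolrCloud node against an external 3-node ZooKeeper ensemble
java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar

# For comparison, embedded ZooKeeper (what you want to move away from):
#   java -DzkRun -jar start.jar
```

With embedded ZooKeeper, killing a Solr instance also kills one ZooKeeper member, so the remaining ensemble has to re-establish quorum while also handling the Solr recovery, which is a plausible explanation for the indexing slowdown you measured.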