How to roll back LevelDB to a previous state?

I'm pretty new to LevelDB. I need something like "roll back to a specific state"; does LevelDB support that? After some searching, I know that LevelDB does not support transactions, but it does support snapshots. Can I restore my database to a snapshot?
My need is like this:
Initial state
Make some change to the database
If anything wrong, go back to initial state.

LevelDB snapshots are not meant for reverting changes; they let other readers in the same process see a consistent version of the database.
One way to meet your requirement is to implement a transaction log yourself:
Begin the transaction by taking a snapshot.
Record in memory every key that is modified.
On commit, release the snapshot and discard the log.
On rollback, look up every modified key in the log, fetch its original value from the snapshot, and write it back. Then release the snapshot and discard the log.
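The bookkeeping above can be sketched in Python. This is a minimal illustration, not LevelDB's API: the dict stands in for the LevelDB handle, and the copy taken in begin() stands in for a real snapshot (which in LevelDB is cheap and does not copy data).

```python
class RollbackableDB:
    """Transaction-log-over-snapshot sketch; the dict stands in for LevelDB."""

    def __init__(self):
        self._db = {}          # stand-in for the LevelDB store
        self._snapshot = None  # frozen view taken at begin()
        self._log = None       # keys touched during the transaction

    def begin(self):
        # A real LevelDB snapshot is O(1); copying here only simulates it.
        self._snapshot = dict(self._db)
        self._log = set()

    def put(self, key, value):
        if self._log is not None:
            self._log.add(key)  # remember that this key was touched
        self._db[key] = value

    def delete(self, key):
        if self._log is not None:
            self._log.add(key)
        self._db.pop(key, None)

    def get(self, key):
        return self._db.get(key)

    def commit(self):
        # Release the snapshot and discard the log.
        self._snapshot = None
        self._log = None

    def rollback(self):
        # Restore every touched key to its value as of begin().
        for key in self._log:
            if key in self._snapshot:
                self._db[key] = self._snapshot[key]
            else:
                self._db.pop(key, None)  # key did not exist at begin()
        self._snapshot = None
        self._log = None
```

With a real binding the same pattern applies; only the reads during rollback would go through the DB's snapshot instead of a copied dict.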

Related

How to use a recent Oracle backup file (from yesterday) and only online redo logs to recover the database in another location (disaster recovery)?

I would like to plan and test my database recovery in another site (another instance on another server in disaster recovery site).
I take an RMAN level 0 image copy every month and daily incremental level 1 backups.
The database is running in noarchivelog mode. The online redo logs are multiplexed to a disk in the disaster recovery site. Also we have a recovery catalog on another server.
I want to test restoring the recent (yesterday's) backup to the database in the disaster recovery site and then recovering by applying only the online redo log files. How can I achieve that?
Side question: is it sufficient to recover if we only have yesterday's backup and the online redo logs containing all of today's transactions, none of which were overwritten? (The database is in noarchivelog mode.)
What is the use of archivelog mode if we have a daily backup and the redo logs are not overwritten during the day until the backup is taken?
What is the use of backing up archive logs?
You are working with a dangerous setup, since you seem to be betting on redo log files that never fill up between your backups. If your data has no value, go ahead; otherwise switch to archivelog mode.
Archives are created when a redo log group fills up. So, in your case you need to copy the online redo log files manually to the remote site for recovery.
How sure are you about the redo log files not being overwritten?
Be sensible: if this is production, switch to archivelog mode. Otherwise, promise not to make promises about being able to do point-in-time recoveries.
Another point: if your online redo log files are damaged, your database has a big problem, and in your case you might lose a day's worth of work. Is that OK? If not, reduce the size of the redo log files to the point where a log switch happens every now and then. I am sure your company has an idea of how much transaction loss it can accept. Many companies allow less than one hour.
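For the restore/recover test itself on the DR host, the RMAN sequence would look roughly like this. This is only a sketch: it assumes the image copies and incrementals are reachable from the DR instance and that the copied online redo logs are in place; whether RECOVER can roll forward to the present depends on those logs covering every change since the backup.

```
# connect to the (not yet open) DR instance
RMAN> CONNECT TARGET /
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;      # lay down the level 0 image copy
RMAN> RECOVER DATABASE;      # apply level 1 incrementals, then redo
RMAN> ALTER DATABASE OPEN;
```

If the files live in different paths on the DR server, you would additionally need to rename datafiles (e.g. with SET NEWNAME / SWITCH) before the restore.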

Why would GlobalKTable with in-memory key-value store not be filled out at restart?

I am trying to figure out how GlobalKTable works, and I noticed that my in-memory key-value store is not filled after a restart. However, the documentation suggests that it should be, since the whole topic is duplicated on clients.
When I debug my application I see that there is a file at /tmp/kafka-streams/category-client-1/global/.checkpoint containing an offset for my topic. This is presumably needed for stores that persist their data, to speed up restarts; but since there is an offset in this file, my application skips restoring its state.
How can I make sure that each restart or fresh start loads the whole data of my topic?
Because you are using an in-memory store, I assume you are hitting this bug: https://issues.apache.org/jira/browse/KAFKA-6711
As a workaround, you can delete the local checkpoint file for the global store -- this will trigger bootstrapping on restart. Or you can switch back to the default RocksDB store.
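The workaround can be sketched as a small shell step run while the application is stopped. STATE_DIR and APP_ID are placeholders for your state.dir and application.id settings; the path layout matches the one observed in the question.

```shell
# Delete the global store's checkpoint file so the global state store is
# re-bootstrapped from the topic on the next start.
STATE_DIR="${STATE_DIR:-/tmp/kafka-streams}"
APP_ID="${APP_ID:-category-client-1}"
CHECKPOINT="$STATE_DIR/$APP_ID/global/.checkpoint"
if [ -f "$CHECKPOINT" ]; then
    rm "$CHECKPOINT"
    echo "removed $CHECKPOINT -- global store will bootstrap on restart"
fi
```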

RethinkDB -- how to recover accidentally deleted docs

I just deleted a whole table in production... How could I perform a recovery or undo that delete?
There is no backup.
That sucks! Unfortunately RethinkDB doesn't keep data around after you delete it. Sometimes deleted data is still on disk somewhere if it hasn't been overwritten yet. If you google "{NAME OF YOUR OPERATING SYSTEM} recover deleted data" you should be able to find instructions on how to get everything salvageable. I'd recommend trying to keep write load as low as possible on the disk until you manage to recover whatever you can.

Reset teamcity.build.id?

Is it possible to reset the teamcity.build.id value in teamcity?
I'm able to reset the build.counter value easily in the build configuration, but I haven't found anything to reset teamcity.build.id.
There is no way to reset build id, because it is used to identify related records in database storage. Resetting it would ruin your build history.
Starting from TeamCity 8.0 build IDs are editable, so you can assign whatever value you want (of course they must be unique).
It also provides a "Bulk Edit" feature.

MySQL database backup: performance issues

Folks,
I'm trying to set up a regular backup of a rather large production database (half a gig) that has both InnoDB and MyISAM tables. I've been using mysqldump so far, but it's taking increasingly long, and the server is completely unresponsive while mysqldump is running.
I wanted to ask for your advice: how do I either
Make mysqldump backup non-blocking - assign low priority to the process or something like that, OR
Find another backup mechanism that will be better/faster/non-blocking.
I know of the MySQL Enterprise Backup product (http://www.mysql.com/products/enterprise/backup.html) -- it's expensive, and not an option for this project.
I've read about setting up a second server as a "replication slave", but that's not an option for me either (this requires hardware, which costs $$).
Thank you!
UPDATE: more info on my environment: Ubuntu, latest LAMPP, Amazon EC2.
If replication to a slave isn't an option, you could leverage the filesystem, depending on the OS you're using:
Consistent backup with Linux Logical Volume Manager (LVM) snapshots.
MySQL backups using ZFS snapshots.
The joys of backing up MySQL with ZFS...
I've used ZFS snapshots on a quite large MySQL database (30GB+) as a backup method and it completes very quickly (never more than a few minutes) and doesn't block. You can then mount the snapshot somewhere else and back it up to tape, etc.
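As a rough sketch of the LVM variant (volume names, sizes and mount points are placeholders -- /dev/vg0/mysql is hypothetical). The read lock has to be held by the same client session while the snapshot is created, which is why lvcreate is run from inside the mysql client via SYSTEM:

```
mysql -u root <<'SQL'
FLUSH TABLES WITH READ LOCK;
SYSTEM lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql;
UNLOCK TABLES;
SQL

mount /dev/vg0/mysql-snap /mnt/mysql-snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```

The database is only locked for the second or two it takes to create the snapshot; the tar runs against the frozen copy.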
Edit: (my previous answer suggested a slave DB to back up from, then I noticed Alex ruled that out in his question.)
There's no reason your replication slave can't run on the same hardware, assuming the hardware can keep up. Grab a source tarball, ./configure --prefix=/dbslave; make; make install; and you'll have a second mysql server living completely under /dbslave.
EDIT2: Replication has a bunch of other benefits as well. For instance, with replication running, you may be able to recover the binlog and replay it on top of your last backup to recover the extra data after certain kinds of catastrophes.
EDIT3: You mention you're running on EC2. Another, somewhat contrived idea to keep costs down is to try setting up another instance with an EBS volume. Then use the AWS api to spin this instance up long enough for it to catch up with writes from the binary log, dump/compress/send the snapshot, and then spin it down. Not free, and labor-intensive to set up, but considerably cheaper than running the instance 24x7.
Try mk-parallel-dump utility from maatkit (http://www.maatkit.org/)
Something you might consider is using binary logs, through a method called 'log shipping'. Just before every backup, issue a command to flush the binary logs; then you can copy all except the current binary log out via your regular file system operations.
The advantage of this method is that you're not locking up the database at all: when MySQL opens the next binary log in sequence, it releases all the file locks on the prior logs, so processing shouldn't be affected. Tar 'em, zip 'em in place, do as you please, then copy it out as one file to your backup system.
Another advantage of using binary logs is that you can restore to a point in time if the logs are available. E.g. you have last year's full backup and every log from then to now, but you want to see what the database looked like on Jan 1st, 2011. You can issue a restore 'until 2011-01-01', and when it stops, you're at Jan 1st, 2011 as far as the database is concerned.
I've had to use this once to reverse the damage a hacker caused.
It is definitely worth checking out.
Please note... binary logs are USUALLY used for replication. Nothing says you HAVE to.
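The flush-and-copy step plus the point-in-time replay described above look roughly like this (paths, file names and the date are placeholders; the binary log location and naming depend on your log_bin setting):

```
mysqladmin flush-logs                  # rotate to a fresh binary log
cp /var/lib/mysql/binlog.0* /backup/   # copy the completed logs out

# later, replay on top of the restored full backup, stopping at a point in time:
mysqlbinlog --stop-datetime="2011-01-01 00:00:00" /backup/binlog.0* | mysql -u root
```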
Adding to what Rich Adams and timdev have already suggested: write a cron job that gets triggered during a low-usage period to perform the slaving task as suggested, to avoid high CPU utilization.
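For example, a crontab entry along these lines (the script name and the 3 a.m. slot are placeholders for your own backup script and quiet period):

```
# m h dom mon dow  command
0 3 * * *          /usr/local/bin/mysql-backup.sh
```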
Check out mysql-parallel-dump as well.
