Disabling automated backups for an Aurora Serverless cluster

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html says that PreferredBackupWindow is used if automated backups are enabled using the BackupRetentionPeriod parameter.
It also says that BackupRetentionPeriod Must be a value from 1 to 35.
Is it actually possible to disable the automated backups? Setting BackupRetentionPeriod to 0 using CloudFormation returns the following error: Invalid backup retention period: 0. Retention period must be between 1 and 35.

Unfortunately, you can't disable automated backups on Aurora. Even if you wanted to work around the issue by finding the most recent backup with
aws rds describe-db-cluster-snapshots --db-cluster-identifier=dbname | jq -r .DBClusterSnapshots[].DBClusterSnapshotIdentifier | tail -n1
and then attempting to manually delete the backup with
aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier rds:dbname-2021-03-30-04-56
this results in the error
An error occurred (InvalidDBClusterSnapshotStateFault)
when calling the DeleteDBClusterSnapshot operation:
Only manual snapshots may be deleted.
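If the goal is just to minimize backup overhead rather than eliminate it, the closest workaround is to drop the retention period to its minimum of 1 day. A minimal sketch (the cluster identifier is a placeholder):
# keep automated backups at the minimum allowed retention of 1 day
aws rds modify-db-cluster --db-cluster-identifier dbname --backup-retention-period 1 --apply-immediately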

It appears that you have a read replica for your DB instance. Because the instance has read replicas connected to it, you won't be able to set the backup retention period to 0. Backups are required to create and manage the binary logs that read replicas rely on.
"Before a DB instance can serve as a replication source, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance for another read replica."

Related

Fixing ERROR: cluster setting 'kv.rangefeed.enabled' is currently overridden by the operator in CockroachDB Serverless

I'm following a guide for setting up a changefeed on CockroachDB, but right from the start I get the error cluster setting 'kv.rangefeed.enabled' is currently overridden by the operator. How can I enable changefeeds?
In CockroachDB Serverless, it's not necessary to set kv.rangefeed.enabled, so you can just skip that part of the setup. If you're setting up a changefeed that writes to external endpoints, you may need to have a credit card on file in your Serverless account, but you can keep your spend limit set to $0 and still run changefeeds.
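For example, assuming a table named users and a cloud storage sink (both placeholders), you can go straight to creating the changefeed without touching any cluster settings, roughly like this:
# create the changefeed directly; no SET CLUSTER SETTING step is needed on Serverless
cockroach sql --url "$CONNECTION_STRING" \
  -e "CREATE CHANGEFEED FOR TABLE users INTO 's3://bucket-name?AWS_ACCESS_KEY_ID=key&AWS_SECRET_ACCESS_KEY=secret';"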

Manually save memgraph snapshot

As stated in /etc/memgraph/memgraph.conf:
# Storage snapshot creation interval (in seconds). Set to 0 to disable periodic
# snapshot creation. [uint64]
--storage-snapshot-interval-sec=300
This means snapshots are only created automatically. Is there a way to manually run a command that saves a snapshot to the /var/lib/memgraph/snapshots/ directory?
You can use the CREATE SNAPSHOT; Cypher query to create a snapshot on demand. Read more about it in the official documentation.
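For instance, this can be scripted from the shell through the Memgraph client, assuming mgconsole is installed and Memgraph is listening on the default port:
# trigger a manual snapshot; it lands in the configured snapshot directory
echo "CREATE SNAPSHOT;" | mgconsole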

Terraform and OCI : "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources are under a single DB System:
When applying my code, a first database is successfully created, but when Terraform attempts to create the second one I get the following error message: "Error: Service error: IncorrectState. The existing Db System with ID <OCID> has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created and then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System status is not up to date yet (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation has actually finished AND the associated db home and db system are back in the 'AVAILABLE' state.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket about this on GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your logs with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested information.
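Until the provider handles the retry correctly, one crude workaround is to script what you are already doing by hand and apply the database resources one at a time, so each creation finishes before the DB System is touched again (the resource addresses below are placeholders for your own):
# apply the database resources individually instead of in one pass
terraform apply -auto-approve -target=oci_database_database.db01
terraform apply -auto-approve -target=oci_database_database.db02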

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key and gets no results.
If I try it manually afterwards I find the record. If I insert a 1-second sleep in the code it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed so that the record is available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts while the write occurs on the master, the data may not have been replicated yet at the time of the read.
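If you want to confirm the lag on a classic MySQL read replica (this does not apply to Aurora readers), a quick check could look like the following; the hostname and credentials are placeholders:
# how many seconds the read host is behind the writer
mysql -h read-replica-host -u admin -p -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master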
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post on how to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel. Beware: this may cause performance issues if not restricted carefully to the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post.

MySQL database backup: performance issues

Folks,
I'm trying to set up a regular backup of a rather large production database (half a gig) that has both InnoDB and MyISAM tables. I've been using mysqldump so far, but I find that it's taking increasingly longer periods of time, and the server is completely unresponsive while mysqldump is running.
I wanted to ask for your advice: how do I either
Make mysqldump backup non-blocking - assign low priority to the process or something like that, OR
Find another backup mechanism that will be better/faster/non-blocking.
I know of the existence of MySQL Enterprise Backup product (http://www.mysql.com/products/enterprise/backup.html) - it's expensive and this is not an option for this project.
I've read about setting up a second server as a "replication slave", but that's not an option for me either (this requires hardware, which costs $$).
Thank you!
UPDATE: more info on my environment: Ubuntu, latest LAMPP, Amazon EC2.
If replication to a slave isn't an option, you could leverage the filesystem, depending on the OS you're using:
Consistent backup with Linux Logical Volume Manager (LVM) snapshots.
MySQL backups using ZFS snapshots.
The joys of backing up MySQL with ZFS...
I've used ZFS snapshots on a quite large MySQL database (30GB+) as a backup method and it completes very quickly (never more than a few minutes) and doesn't block. You can then mount the snapshot somewhere else and back it up to tape, etc.
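A rough sketch of the LVM variant, assuming the data directory lives on its own logical volume (the volume group, LV names, sizes and paths are all placeholders):
# snapshot the MySQL data volume; for a consistent copy, hold FLUSH TABLES WITH READ LOCK
# in a separate session (or use a helper such as mylvmbackup) while lvcreate runs
lvcreate --size 5G --snapshot --name mysql-snap /dev/vg0/mysql-data
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap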
Edit: (my previous answer suggested a slave db to back up from, then I noticed Alex ruled that out in his question.)
There's no reason your replication slave can't run on the same hardware, assuming the hardware can keep up. Grab a source tarball, ./configure --prefix=/dbslave; make; make install; and you'll have a second mysql server living completely under /dbslave.
EDIT2: Replication has a bunch of other benefits as well. For instance, with replication running, you may be able to recover the binlog and replay it on top of your last backup to recover the extra data after certain kinds of catastrophes.
EDIT3: You mention you're running on EC2. Another, somewhat contrived idea to keep costs down is to try setting up another instance with an EBS volume. Then use the AWS api to spin this instance up long enough for it to catch up with writes from the binary log, dump/compress/send the snapshot, and then spin it down. Not free, and labor-intensive to set up, but considerably cheaper than running the instance 24x7.
Try the mk-parallel-dump utility from Maatkit (http://www.maatkit.org/).
Something you might consider is using binary logs here, through a method called 'log shipping'. Just before every backup, issue a command to flush the binary logs, and then you can copy all except the current binary log out via your regular file system operations.
The advantage of this method is that you're not locking up the database at all: when MySQL opens the next binary log in sequence, it releases all the file locks on the prior logs, so processing shouldn't be affected. Tar 'em, zip 'em in place, do as you please, then copy it out as one file to your backup system.
Another advantage of using binary logs is that you can restore up to a given point in time if the logs are available. E.g. you have last year's full backup and every log from then to now, but you want to see what the database looked like on Jan 1st, 2011. You can issue a restore 'until 2011-01-01' and when it stops, you're at Jan 1st, 2011 as far as the database is concerned.
I've had to use this once to reverse the damage a hacker caused.
It is definitely worth checking out.
Please note... binary logs are USUALLY used for replication. Nothing says you HAVE to.
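A minimal sketch of the shipping step and of the point-in-time replay, assuming binary logging is already enabled; the paths and log names are placeholders:
# rotate to a fresh binary log, then ship everything except the newest one
mysqladmin flush-logs
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I{} cp {} /backup/binlogs/
# later: replay the shipped logs on top of a full restore, stopping at Jan 1st, 2011
mysqlbinlog --stop-datetime="2011-01-01 00:00:00" /backup/binlogs/mysql-bin.* | mysql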
Adding to what Rich Adams and timdev have already suggested, write a cron job that runs during a low-usage period to perform the slaving task suggested above and avoid high CPU utilization.
Check mk-parallel-dump also.
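For example, a crontab entry along these lines would run the backup at night with low CPU/IO priority (the script path is a placeholder):
# run the backup at 03:30 with minimal impact on the server
30 3 * * * nice -n 19 ionice -c3 /usr/local/bin/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1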
