Do I have to back up the full dataset of the NebulaGraph database on the AWS cloud every time? - database-backups

I am using Nebula Graph on AWS and need to back up the data every day in case any data is lost. I use the Backup & Restore feature to do this, but it does not support incremental backups, and I really don't want to back up all the data every time.
Is there a way to do the backup based on the latest backup?
I searched their documentation but found nothing.

Related

How do I clean up the botstate db in Cosmos DB

I'd like to periodically clean up old conversations/user states, e.g. older than 30 days.
Is there a script that can do that?
There is no direct way of achieving this currently through the framework.
I have been able to achieve this in one simple way.
Assuming you are using Cosmos DB (SQL) for your state management, you could set the TTL for that container. This will delete documents 30 days after the last update made to them.
You could also do the same by setting the TTL when you first create the container through your Bot SDK; that way you wouldn't need to change it manually in the portal.
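As a rough sketch (assuming a Cosmos DB SQL/Core API account; the resource group, account, database, and container names are placeholders), the default TTL can also be set at container creation time with the Azure CLI:

    # Create the state container with a 30-day default TTL (2592000 seconds).
    # Documents are deleted automatically ~30 days after their last update.
    az cosmosdb sql container create \
        --resource-group my-bot-rg \
        --account-name my-bot-cosmos \
        --database-name botstate \
        --name conversation-state \
        --partition-key-path /id \
        --ttl 2592000

Setting --ttl to -1 instead enables TTL on the container without a default expiry, so individual documents can opt in with their own ttl property.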

Backup strategy Ubuntu Laravel

I am searching for a backup strategy for my web application files.
I am hosting my (Laravel) application on an Ubuntu (18.04) server in the cloud and currently have around 80GB of storage that needs to be backed up (this grows fast). The biggest files are around ~30MB; the rest are small jpg/txt/pdf files.
I want to make a full backup of the storage directory at least twice a day and store it as a zip file on a local server. I have two reasons for this: independence from cloud providers, and archiving.
My first backup strategy was to zip all the contents of the storage folder and rsync the zip. This goes well until the archive reaches a couple of gigabytes; then the server gets completely stuck on CPU usage.
My second approach is plain rsync, but with this I can't track when a file is deleted or added.
I am looking for a good backup strategy that preferably generates zips before or after the backup and stores them so we can browse and examine them back in time.
Strangely enough I could not find anything that suits me; I hope someone can help me out.
I agree with @RobertFridzema that the whole server becomes unresponsive when using the ZIP functionality from the Spatie package.
Had the same situation with a customer project. My suggestion is to keep the source code files within version control. Just back up the dynamic/changing files with rsync (incremental works best and fast) and create a separate database backup strategy. For example with MySQL/MariaDB: mysqldump, encrypt the resulting file, and move it to external storage as well.
If ZIP creation is still a problem, I would maybe use storage that is already set up with RAID functionality, or if that is not possible, I would definitely not use the ZIP functionality on the live server. Rsync incrementally to another server and do the backup strategy there.
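A minimal sketch of that approach, assuming a cron-driven script; the paths, database name, and GPG recipient are placeholders:

    #!/usr/bin/env bash
    set -euo pipefail
    TODAY=$(date +%F)

    # Incremental copy of the storage directory; --link-dest hard-links unchanged
    # files against the previous snapshot, so each daily snapshot stays cheap.
    rsync -a --delete \
        --link-dest=/backups/storage/latest \
        /var/www/app/storage/ "/backups/storage/$TODAY/"
    ln -sfn "/backups/storage/$TODAY" /backups/storage/latest

    # Separate database backup: dump, compress, encrypt, ready to ship off-site.
    mysqldump --single-transaction my_database | gzip \
        | gpg --encrypt --recipient backups@example.com \
        > "/backups/db/my_database-$TODAY.sql.gz.gpg"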
Spatie has a package for Laravel backups that can be scheduled in the Laravel job scheduler. It will create zips of the entire project, including the storage dirs:
https://github.com/spatie/laravel-backup

How do you switch between standard and serverless configurations in Amazon Aurora

I am looking at this Amazon page - https://aws.amazon.com/rds/aurora/serverless/ - and it has this quote:
"You pay on a per-second basis for the database capacity you use when the database is active, and migrate between standard and serverless configurations with a few clicks in the AWS Management Console."
I have a few normal Aurora clusters and want to switch them to serverless. I have looked and looked and cannot find the "migrate with a few clicks" bit in the Amazon user interface. I made a new serverless cluster just fine, and so I could do a stop, backup, and restore with a short outage - but if I can do this without an outage, that would be far superior.
So where are these "few clicks" - or perhaps you will tell me the "few clicks" means stop, backup, and restore. Either way I think a lot of folks could benefit from knowing what "few clicks" make this happen.
As a comment on @drchuck's approach - we've learned the hard way that AWS Database Migration Service does a bad job of creating the schema in the target database. However, there's a simple workaround:
1) Run mysqldump --no-data to get the exact schema from the source database.
2) Execute the dumped schema on the target database.
3) Within your DMS task, under target table preparation mode, choose "Truncate" instead of "Drop tables on target". (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html)
With this in place, DMS doesn't create the schema on the target side, and things work pretty well (all existing data is loaded, and then ongoing changes are synced in near-real-time).
We've used this approach for minimal downtime cutovers a few times.
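For reference, a rough sketch of steps 1) and 2) (host names and the schema name are placeholders):

    # 1) Schema only, no data, from the source database
    mysqldump --no-data --routines --triggers \
        -h source-db.example.com -u admin -p my_schema > my_schema-ddl.sql

    # 2) Apply the exact schema to the target before starting the DMS task
    mysql -h target-db.example.com -u admin -p my_schema < my_schema-ddl.sql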
It took more than a while to figure out those few clicks.
I'm here because I too could not find them, and yes, I saw the exact quote on the AWS page you indicated saying that you could.
First you take a snapshot and then you restore it. In the process of restoring it you can select a serverless instance. (At least under SOME conditions; I do not think that a 5.7.12 snapshot (just confirmed, actually) can be restored to a serverless configuration.)
I suspect that 5.7.12 will happen in due time.
Right now the magic bullet is to start with a 5.6.10a version, take a snapshot, and then restore that to a serverless instance.
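If you prefer the CLI over the console clicks, the same snapshot-and-restore path can be sketched roughly like this (cluster and snapshot identifiers are placeholders; this assumes an Aurora MySQL 5.6-compatible source):

    # Snapshot the existing provisioned cluster
    aws rds create-db-cluster-snapshot \
        --db-cluster-identifier my-provisioned-cluster \
        --db-cluster-snapshot-identifier my-cluster-snap

    # Restore the snapshot as a serverless cluster
    aws rds restore-db-cluster-from-snapshot \
        --db-cluster-identifier my-serverless-cluster \
        --snapshot-identifier my-cluster-snap \
        --engine aurora \
        --engine-mode serverless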
For what it's worth, after a long time:
Apparently Amazon Aurora Serverless is only compatible with MySQL 5.6 - this explains why 5.7 snapshots cannot be recovered.
So the two options are:
1) downgrading the MySQL version to 5.6 first, or
2) dumping and importing the data (after reading the other answers, I'd go for the second option).
Further reading:
https://aws.amazon.com/rds/aurora/serverless/?nc1=h_ls#How_to_Get_Started
When I did not get an answer in a few days, I did the conversion two ways with different results, so I figured I would share my results here. I would still love to hear a better approach. (1) When I did the conversion using mysqldump and restore, with a short outage, things were fine. (2) When I used AWS Database Migration Service it went pretty badly.
First, you have to set the binary log format to "ROW" and the retention to 24 hours. That necessitated server restarts on my old clusters. Then, when the data migration worked, I lost all my auto increments, the NULL/NOT NULL settings on my columns, the UNIQUE constraints, and the foreign keys in the new tables. Literally the only things that migrated correctly were the actual data and the PRIMARY KEY indications. Also, I would recommend migrating one database (i.e. schema) at a time, and don't try to migrate the MySQL internal schemas. I said "migrate everything" and the migration tool tried to migrate the MySQL stuff - sheesh.
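For reference, the binlog prerequisites can be set up roughly like this (the parameter group name and host are placeholders; the parameter group change needs a reboot to take effect):

    # Set binlog_format=ROW in the source cluster's parameter group
    aws rds modify-db-cluster-parameter-group \
        --db-cluster-parameter-group-name my-cluster-params \
        --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot"

    # Keep binary logs around for 24 hours so DMS can read ongoing changes
    mysql -h source-cluster.example.com -u admin -p \
        -e "CALL mysql.rds_set_configuration('binlog retention hours', 24);"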
The one thing the AWS Database Migration Service did that was really cool was the migrate and monitor (made possible by the binary logging on the rows). You could watch it moving rows.
Just for the record, AWS amended the quoted documentation in mid-2022 by changing 'few clicks' to 'few steps'.🤣
"You pay on a per-second basis for the database capacity that you use when the database is active, and migrate between standard and serverless configurations with a few steps in the Amazon Relational Database Service (Amazon RDS) console."
Currently the documentation states that there are two (multi-step) methods that can be used to migrate from provisioned to serverless, and from serverless to provisioned:
1) Snapshot restore.
2) Logical backup and restore.
Details here.

Laravel Restore a Backup

I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and I have settled on https://github.com/spatie/laravel-backup.
However, once the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it doesn't really require much learning and is instead just a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this: Cross-Region Replication
A typical production environment automatically runs backups of the most important things that your deployment needs in order to recover from a failure. Those parts would commonly be your database, your storage folder, and configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up; you can have the entire disk mirrored somewhere, or you can schedule a backup script which runs periodically and backs up the things that are most important to your application.
Personally I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
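For the restore side of the original question, a minimal sketch (the bucket, archive name, and the path of the dump inside the archive are assumptions; spatie/laravel-backup archives bundle a database dump alongside the application files):

    # Fetch the most recent backup archive from S3 and restore the database dump inside it
    aws s3 cp s3://my-backup-bucket/laravel-backup/2024-01-01-00-00-00.zip ./backup.zip
    unzip backup.zip -d ./restore
    mysql -u root -p my_database < ./restore/db-dumps/mysql-my_database.sql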
Update
Take a look at the link below:
User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot, as shown in the example there.
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need a lot of security checks if something like that were even attempted. Final word: I believe manually entering or sending the restore request would be the most solid solution.
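A rough sketch of that call with the AWS CLI (instance and snapshot identifiers are placeholders):

    # Bring up a fresh instance from the chosen snapshot, then point the app at its new endpoint
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier laravel-db-restored \
        --db-snapshot-identifier laravel-db-snapshot-2024-01-01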

Minimizing downtime in a SaaS multi-tenant web application with a separate-database-per-tenant model

We have a separate database for each tenant, which creates a lot of downtime when we deploy changes to the cloud. The steps (in brief) that we follow whenever we have to deploy changes to the cloud are:
1) Take the client site down.
2) Take a snapshot of the current RDS instance (in case anything goes south).
3) Run the migration scripts (changes) on each tenant database on the RDS instance.
4) If everything goes well, make the client site live again.
Now the problem is, we have around 250 tenants as of now, and the 3rd step (running the migration scripts) is taking too much time, which in turn increases the downtime. Any suggestions on how to improve this process, or should we be doing it some other way? There is a clear lack of enterprise-level expertise on our end, so any help will be appreciated. Thanks!
Without knowing anything about your application, here are some things to think about:
If your application would still have some value when running in 'read-only' mode, you could limit the actual downtime by doing the following (a rough CLI sketch follows the list):
1) Make sure all of your RDS databases have a read replica.
2) Set your application into 'read-only' mode (i.e. through some application code).
3) Let your read replica catch up with your master.
4) Promote your read replica to a standalone DB.
5) Run your updates against this copy of the database.
6) Redirect your application to the new master.
7) Create a new read replica from this new master.
8) Delete/archive your old database.
You still have to do all the work, and it still takes a while to run, but the actual downtime for the user should be minimal.
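A rough AWS CLI sketch of the replica-related steps (instance identifiers are placeholders; the read-only switch and the final connection cutover are application-specific):

    # 1) Create a read replica of a tenant database
    aws rds create-db-instance-read-replica \
        --db-instance-identifier tenant42-replica \
        --source-db-instance-identifier tenant42

    # 4) Once replication lag is zero, promote it to a standalone instance
    aws rds promote-read-replica \
        --db-instance-identifier tenant42-replica

    # 5) Run the migration scripts against the promoted copy,
    #    then redirect the application to it as the new master.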

Resources