How can I use bash to handle SQL backups?

I have a database that I back up manually every day, like so:
mysqldump -uroot -ppassword forum > 4.25.2011.sql
However, I'd like to use a script, rather than a tool like mysqldumper, to do the job.
If 3 .sql files already exist in the backup directory, how can I delete the oldest one?
So far all I have is:
#!/bin/bash
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
I also can't figure out how to use a variable as the filename to save my .sql dump under. How would I do that?
My VPS is limited to 10 GB and disk space is a concern, or else I wouldn't delete any files.

For your first part, instead of
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
do
today=$(date +%m.%d.%Y)
mysqldump -uroot -ppassword forum > /root/backups/$today.sql
In particular:
there must not be any spaces around the equals sign
running eval is not what you want
The simplest way to only keep three files would be:
rm /root/backups/forum-3.sql
mv /root/backups/forum-2.sql /root/backups/forum-3.sql
mv /root/backups/forum-1.sql /root/backups/forum-2.sql
mysqldump -uroot -ppassword forum > /root/backups/forum-1.sql
Tools like ls -l, or the contents of the file itself, will tell you the date if you need it.
If you really need the date in the file name, the easiest tool to help you is GNU date:
dateformat="%m.%d.%Y"
rm forum-$(date -d "-3 days" +$dateformat).sql
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
Or use find, e.g.
find . -name "forum*.sql" -mtime +3 -delete
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
After that, you could look at logrotate.
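Putting the pieces together, a minimal daily script could look like this (a sketch; the paths and credentials are the ones from the question, and the cleanup reuses the find approach above):
#!/bin/bash
# Dump with today's date in the name, then prune dumps older than 3 days
backup_dir=/root/backups
today=$(date +%m.%d.%Y)
mysqldump -uroot -ppassword forum > "$backup_dir/forum-$today.sql"
find "$backup_dir" -name 'forum-*.sql' -mtime +3 -delete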

Since you're rotating the backups daily, you could just delete all of the ones older than 3 days. Here's also how to write to a filename based on the date:
#!/bin/bash
mysqldump -uroot -ppassword forum > /root/backups/`date +%m.%d.%Y`.sql
find /root/backups -type f -ctime +3 -exec rm {} \;

If you just want to delete the oldest file you can add the following to your script:
ls -1tr *.sql | head -n 1 | xargs rm
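Note that parsing ls output is fragile with unusual filenames. A somewhat sturdier sketch (assuming GNU find and the /root/backups directory from the question):
find /root/backups -maxdepth 1 -name '*.sql' -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2- | xargs -r rm
This prints each file with its modification timestamp, sorts numerically so the oldest comes first, and removes only that one.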

I have a dedicated SQL server that holds all the databases for my remote Apache servers on AWS.
What I did was make one script per database, so I have total control over each one without messing things up.
To back up a database, use this script, named demosql.sh:
#!/bin/bash
# Database credentials
user="database user"
password="db password"
host="localhost"
db_name="database name"
# Other options
backup_path="/location/of/folder/for/sql"
date=$(date +"%d-%b-%Y")
time2=$(date +"%R")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user="$user" --password="$password" --host="$host" "$db_name" > "$backup_path/$db_name-$date-$time2.sql"
This is how I made a separate wrapper script for each database, to control each one:
#!/bin/bash
#project name db
source $(dirname $0)/demosql.sh
Then, immediately after the backup, I sync the folder holding all of my SQL dumps to S3 storage, like this:
#!/bin/bash
s3cmd sync --skip-existing /location/ s3://folder/
You can also drive scripts like these from a cron job, and reach remote servers from bash over SSH; a hypothetical crontab sketch follows.
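For example (the times, the /root/scripts path, and user@remote-host are assumptions; the SSH line presumes key-based authentication so cron can connect without a prompt):
# dump locally at 02:00, sync to S3 at 02:30, trigger a remote backup at 03:00
0 2 * * * /root/scripts/demosql.sh
30 2 * * * s3cmd sync --skip-existing /location/ s3://folder/
0 3 * * * ssh user@remote-host '/root/scripts/demosql.sh'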

Related

Postgres database backup not working locally (Crontab + Shell script using expect)

I am having issues on my Ubuntu server: I have two scripts which perform a pg_dump of two databases (a remote one and a local one). However, the backup file for the local one always ends up empty.
When I run the script manually, no problem.
The issue occurs when the script is run via crontab while I am NOT logged into the machine. If I'm in an SSH session there's no problem: it works with crontab. But when I'm not connected, it does not work.
My full scripts/setup are below; feel free to suggest any improvements. For now I just want it to work, but if my method is insecure/inefficient I would gladly hear about alternatives :)
So far I've tried:
Using the postgres user for the local database (instead of another user I use to access the DB with my applications)
Switch pg_dump for /usr/bin/pg_dump
Here's my setup:
Crontab entry:
0 2 * * * path/to/my/script/local_databasesBackup.sh ; path/to/my/script/remote_databasesBackup.sh
scriptInitialization.sh
set LOCAL_PWD "password_goes_here"
set REMOTE_PWD "password_goes_here"
Expect script, called by crontab (local/remote_databaseBackup.sh):
#!/usr/bin/expect -f
source path/to/my/script/scriptInitialization.sh
spawn path/to/my/script/localBackup.sh
expect "Password: "
send "${LOCAL_PWD}\r"
expect eof
exit
Actual backup script (local/remoteBackup.sh):
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
delete_yesterday_backup_and_perform_backup () {
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
    YESTERDAY_2_AM=$(date --date="02:00 yesterday" +"%Y-%m-%d_%H%M")
    YESTERDAY_BACKUP_FILE=/path/to/local/backup/folder/$YESTERDAY_2_AM.tar
    if [ -f "$YESTERDAY_BACKUP_FILE" ]; then
        echo "$YESTERDAY_BACKUP_FILE exists. Deleting"
        rm "$YESTERDAY_BACKUP_FILE"
    else
        echo "$YESTERDAY_BACKUP_FILE does not exist."
    fi
}
CURRENT_DAY_NUMBER=$(date +"%d")
FIRST_DAY_OF_THE_MONTH="01"
if [ "$CURRENT_DAY_NUMBER" = "$FIRST_DAY_OF_THE_MONTH" ]; then
echo "First day of the month: Backup without deleting the previous backup"
/usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
else
echo "Not the first day of the month: Delete backup from yesterday and backup"
delete_yesterday_backup_and_perform_backup
fi
The only difference between my local and remote script is the pg_dump parameters:
Local looks like this /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
Remote looks like this: pg_dump -U remote_account -p 5432 -h remote.address.com -W -F t remoteDatabase > /path/to/local/backup/folder/$DATE.tar
I ended up making two separate scripts because I thought having one might be the cause of the issue. However, I'm now pretty sure it's not.
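As an aside (not from the original post), the expect wrapper can often be avoided entirely with a ~/.pgpass file: pg_dump reads it automatically once the -W flag is dropped, and it behaves the same with or without a terminal attached, which sidesteps exactly this kind of cron-only failure. A sketch, using the names from the scripts above:
# ~/.pgpass -- must be owned by the crontab user and chmod 600
# format: hostname:port:database:username:password
localhost:5432:localDatabaseName:postgres:password_goes_here
remote.address.com:5432:remoteDatabase:remote_account:password_goes_here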

Shell script to remove backups that are older than 2 Weeks

I have a shell script that takes a backup of the Mongo DB on a daily basis, and it's working as expected. Now I need to remove the backups that are older than 2 weeks. Would that be achievable with the current naming convention? Can anyone shed some light? I'm fairly new to shell scripting.
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password>
Finally got it with the below script
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password> -o $DEST
find /dbBackups/* -type d -ctime +14 -exec rm -rf {} +
Thanks to Shell script to delete directories older than n days
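Since the directory names encode the date, you could also delete by name rather than by ctime; a sketch assuming GNU date and the %m%d%y convention from the script above:
#!/bin/sh
# remove the backup directory created exactly 14 days ago, if it exists
OLD=/dbBackups/$(date -d "14 days ago" +%m%d%y)
[ -d "$OLD" ] && rm -rf "$OLD"
Unlike the find approach, this removes only the directory from exactly 14 days ago, so older leftovers can accumulate if the job ever skips a day.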

Permission Issue: Creating postgres backup as postgres user

I have the following Postgres backup script; it's a shell script written to run as the postgres user.
But the problem is that the postgres user doesn't have permission to write to these directories. I don't have sudo on these machines, but I have changed the directory to mode 755 and added myself to a group that has read-write-execute permission. Since the postgres user isn't part of that unix group, I guess that's why I am running into this issue.
My goal is to put this in the crontab, but before that I need to get the script running with the proper permissions:
#!/bin/bash
# location to store backups
backup_dir="/location/to/dir"
# name of the backup file has the date
backup_date=`date +%d-%m-%Y`
# only keep the backup for 30 days (maintain low storage)
number_of_days=30
databases=`psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`
for i in $databases; do
    if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
        echo Dumping $i to $backup_dir/${i}_$backup_date
        pg_dump -Fc $i > $backup_dir/${i}_$backup_date
    fi
done
find $backup_dir -type f -prune -mtime +$number_of_days -exec rm -f {} \;
Before doing this, be sure to log in as a superuser (sudo su) and try executing these:
usermod -a -G unix postgres (add the existing postgres user to the unix group)
su postgres (Login as postgres user)
mkdir folder (Go to the directory where postgres needs to write files)
(From this line down is my answer to the #find-missing-semicolon question.)
Just to illustrate with a shell script: you can capture the password using the read command and store it in a variable. Here I stored the password in password and echoed it afterwards. I hope this helps.
#!/bin/bash
read -s -p "Password: " password
echo $password
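Building on that, the captured password could be handed to pg_dump through the standard PGPASSWORD environment variable instead of expect (a sketch; the database name is the one from the question above):
#!/bin/bash
read -s -p "Password: " password
# libpq reads PGPASSWORD, so pg_dump no longer prompts (drop the -W flag)
export PGPASSWORD="$password"
pg_dump -U postgres -F t localDatabaseName > backup.tar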

MySQLdump with arguments

Hello to the professionals!
There was a good and simple script idea for making a mysqldump of every database, taken from
dump all mysql tables into separate files automagically?
(author: https://stackoverflow.com/users/1274838/elias-torres-arroyo), with the script as follows:
#!/bin/bash
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="PASSWORD"
BACKUP_DIR="/backup/01sql/";
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
    # dump each database in a separate file
    mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
But the problem is that this script does not "understand" arguments like
--add-drop-database
to perform
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" --add-drop-database | gzip > "$BACKUP_DIR/$db.sql.gz"
Does anyone have an idea how to get this script to accept the additional arguments listed under
mysqldump --help
because all my tests show that it doesn't.
Thank you in advance for any hints!
--add-drop-database works only with --all-databases or --databases.
Please see the reference in the docs.
So in your case, the mysqldump utility ignores that parameter because you are dumping a single database at a time.
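One workaround consistent with that: pass each database via --databases, which makes mysqldump emit the CREATE DATABASE/USE statements that --add-drop-database attaches to. A sketch of the adjusted loop body from the script above:
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS --add-drop-database --databases "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"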

Can I get a dump of all my databases *except one* using mysqldump?

I'm currently using mysqldump to back up my dev machine and servers.
There is one project I just started, however, that has a HUUUUUGE database that I don't really need backed up, and it'll be a big problem to add it to the rest of the backup cycle.
I'm currently doing this:
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases > g:\backups\MySQL\mysqlbackup.sql
Is it possible to somehow specify "except this database(s)"?
I wouldn't like to have to specify the list of DBs manually, since that would mean that I'd have to remember updating my backup batch file every time I create a new DB, and I know that's not gonna happen.
EDIT: As you probably guessed from my command line above, I'm doing this on Windows, so I can't do any fancy bash stuff, only wimpy .bat things.
Alternatively, if you have other ideas to solve this same issue, they are more than welcome, of course!
mysql ... -N -e "show databases like '%';" | grep -v -F databaseidontwant | xargs mysqldump ... --databases > out.sql
echo 'show databases;' | mysql -uroot -proot | grep -v ^Database$ | grep -v ^information_schema$ | grep -v ^mysql$ | grep -v -F db1 | xargs mysqldump -uroot -proot --databases > all.sql
This dumps all databases except mysql, information_schema, and db1.
Or if you'd like to review the list before dumping:
echo 'show databases;' | mysql -uroot -proot > databases.txt
edit databases.txt and remove any you don't want to dump
cat databases.txt | xargs mysqldump -uroot -proot --databases > all.sql
What about
--ignore-table=db_name.tbl_name
Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times.
Maybe you'll need to specify a few to completely ignore the big database.
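For example (hypothetical database and table names), skipping two large tables in the command from the question:
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases --ignore-table=hugedb.logs --ignore-table=hugedb.cache > g:\backups\MySQL\mysqlbackup.sql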
I created the following one-line solution, avoiding multiple grep commands.
mysql -e "show databases;" | grep -Ev "Database|DatabaseToExclude1|DatabaseToExclude2" | xargs mysqldump --databases >mysql_dump_filename.sql
The -E in grep enables extended regex support, which allows providing alternative matches separated by the pipe symbol "|". More options can be added to the mysqldump command, but only before the "--databases" parameter.
A little side note: I like to define the filename for the dump like this ...
... >mysql_dump_$(hostname)_$(date +%Y-%m-%d_%H-%M).sql
This will automatically add the host name, date, and time to the filename. :)
Seeing as you're using Windows, you should have PowerShell available.
Here is a short PowerShell script to get a list of all databases, remove unwanted ones from the list, and then use mysqldump to back up the others.
$MySQLPath = "."
$Hostname = "localhost"
$Username = "root"
$Password = ""
# Get list of Databases
$Databases = [System.Collections.Generic.List[String]] (
& $MySQLPath\mysql.exe -h"$Hostname" -u"$Username" -p"$Password" -B -N -e"show databases;"
)
# Remove databases from list we don't want
[void]$Databases.Remove("information_schema")
[void]$Databases.Remove("mysql")
# Dump database to .SQL file
& $MySQLPath\mysqldump.exe -h"$HostName" -u"$Username" -p"$Password" -B $($Databases) | Out-File "DBBackup.sql"
Create a backup user and only grant that user access to the databases that you want to backup.
You still need to remember to explicitly grant the privileges but that can be done in the database and doesn't require a file to be edited.
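A sketch of the grants (hypothetical user and database names; SELECT, LOCK TABLES, SHOW VIEW, and TRIGGER cover what mysqldump typically needs):
mysql -uroot -p <<'SQL'
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'choose_a_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON db1.* TO 'backup'@'localhost';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON db2.* TO 'backup'@'localhost';
SQL
Because SHOW DATABASES only lists databases the user has some privilege on, running mysqldump --all-databases as this user dumps just the granted ones.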
It took me a lot of finagling to come up with this but I've used it for a few years now and it works well...
mysql -hServerName -uUserName -pPassword -e "SELECT CONCAT('\nmysqldump -hServerName -uUserName -pPassword --set-gtid-purged=OFF --max_allowed_packet=2048M --single-transaction --add-drop-database --opt --routines --databases ',DBList,' | mysql -hServerName2 -uUserName2 -pPAssword2 ' ) AS Cmd FROM (SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') AS DBList FROM information_schema.SCHEMATA WHERE LEFT(schema_name, 8) <> 'cclegacy' AND schema_name NOT IN ('mysql','information_schema','performance_schema','test','external','othertoskip')) a \G" | cmd
Instead of the pipe over to mysql (where I'm moving data from ServerName to ServerName2), you could redirect to a file, but this lets me tailor what I move. Sometimes I even OR the list so I can say LIKE 'Prefix%', etc.
You can use this one for production.
It excludes performance_schema, information_schema, mysql, and sys; modify the grep pattern for your needs.
MYSQL_USER=
MYSQL_PASS=
MYSQL_HOST=
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_HOST}"
MYSQLDUMP_OPTIONS="--routines --triggers --single-transaction"
DBLIST=`mysql -s --host=$MYSQL_HOST --user=$MYSQL_USER --password=$MYSQL_PASS \
--execute="SHOW DATABASES;" | grep -v \
'performance_schema\|information_schema\|mysql\|sys' | awk '{printf("%s ",$0)}'`
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} --databases ${DBLIST} | gzip > all-dbs.sql.gz
