Shell script to remove backups that are older than 2 Weeks - shell

I have a shell script that takes a backup of the Mongo DB on a daily basis. It's working as expected. Now I need to remove the backups that are older than 2 weeks. Would that be achievable with the current naming convention? Can anyone shed some light? I'm fairly new to shell scripting.
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password>

Finally got it with the below script
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password>
find /dbBackups/* -type d -ctime +14 -exec rm -rf {} +
Thanks to Shell script to delete directories older than n days
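For reference, a slightly tighter variant of the same cleanup step (just a sketch, assuming the dated directories sit directly under /dbBackups) scopes find to that directory and keys off modification time rather than the glob and ctime:
find /dbBackups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +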

Related

How to delete contents of user home dir safely via bash

I am writing a bash script to do an account restore. The contents of the home dir are zipped up using this command.
sudo sh -c "cd /home/$username; zip -0 -FS -r -b /tmp /home/0-backup/users/$username.zip ."
This works as expected.
If the user requests a restore of their data, I am doing the following
sudo sh -c "cd /home/$username; rm -rf *"
Then
sudo -u $username unzip /home/0-backup/users/$username.zip -d /home/$username/
This works as expected.
However, you can see the flaw in the delete statement: if the username is not set, we delete all users' home dirs. I have if statements that do the checking to make sure the username is there. I am looking for some advice on a better way to handle resetting the user's account data that isn't so dangerous.
One thought I had was to delete the user account and then recreate it. Then do the restore. I think that this would be less risky. I am open to any suggestions.
Check the parameters first.
Then use && after cd so that it won't execute rm if the cd fails.
if [ -n "$username" ] && [ -d "/home/$username" ]
then
sudo sh -c "cd '/home/$username' && rm -rf * .[^.]*"
fi
I added .[^.]* in the rm command so it will delete dot-files as well. [^.] is needed to prevent it from deleting . (the user's directory) and .. (the /home directory).
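A further hardening idea, not from the answer above but sketched here on the assumption that GNU find is available, is to skip the glob entirely and let find remove everything inside the home directory, dot-files included:
if [ -n "$username" ] && [ -d "/home/$username" ]
then
sudo find "/home/$username" -mindepth 1 -delete
fi
Because find is pointed at the directory explicitly and -mindepth 1 excludes the directory itself, there is no wildcard left to expand to something unexpected.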

Bash cron job on hpanel not locating directory

I have the following code in a cron job; it runs, but the code does not really do what it is supposed to. It does not create the directory, plus it does not do anything else in the code. Please help check if the way I pointed to the directory is wrong.
#!/bin/bash
NAMEDATE=`date +%F_%H-%M`_`whoami`
NAMEDATE2=`date `
mkdir ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"*******" u3811*****_data | gzip ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" |
mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail@gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
exit 0
Your NAMEDATE variable needs to be modified a bit, as shown below; for more information about variables in bash, see this link.
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
When you issue the mkdir command you will need to pass the -p option to create the complete directory structure if it doesn't exist.
mkdir -p ~/home/u3811numbers/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
Also, the ~ character on Linux-based distributions is used as a shortcut for the home directory of the user that invokes it, so in the lines above the result is /home//home/u3811*****/domains/website.com/public_html/cron/backup/files/2020-09-04_23-13_; you can read more about it here.
In your last command before the exit, you might need to pass a wildcard (*) to avoid removing the executable bit on the directory, see below.
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
The final version of your script will look something like this.
#!/bin/bash
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
NAMEDATE2=$(date)
mkdir -p ~/home/u3811******/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"******" u3811*****_data | gzip > ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" | mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail#gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
To debug a bash script you can always pass the -x flag; for more information, take a look at this article.
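For example (a quick sketch; backup.sh is just a placeholder name for your script), you can run the script with tracing enabled, or turn tracing on inside it:
bash -x ./backup.sh    # backup.sh is a placeholder for your script name
# or enable tracing from inside the script, right after the shebang:
set -x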

Adding shell if statement inside lftp

I'm trying to use SFTP to copy some files from one server to another; this task should run every week. The script I use:
HOST='sftp://my.server.com'
USER='user1'
PASSWORD='passwd'
DIR=$HOSTNAME
REMOTE_DIR='/home/remote'
LOCAL_DIR='/home/local'
# LFTP via SFTP connection
lftp -u "$USER","$PASSWORD" $HOST <<EOF
# changing directory
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
My issue is that put is executed without taking into consideration the result of the if statement.
PS: The error message I get is the following:
put: Access failed: No such file (/home/backups/myhost/upload.txt)
LFTP has no if statement!
What are you doing here?
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
You call a sub command in a here document. The sub command is executed locally before lftp is started, and its output is pasted into the here document, which gets passed to lftp. This only works because mkdir has no output. You do not call mkdir on the FTP server; you call the mkdir of your local shell. Effectively it is the same as if you put the if statement before the lftp execution.
if [ ! -d "$DIR" ]; then
mkdir $DIR
fi
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
What you are trying to do does not work. You have to think about a different solution.
Right now I have no FTP server to test it, but it might be possible to use the -f option of LFTP's mkdir. I assume that it may work like the -f option of the Unix rm command. Try this:
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
mkdir -f "$DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
Update: It works as expected. Creating a directory that already exists throws no error if you use the -f option:
lftp anonymous@localhost:/pub> mkdir -f dir
mkdir ok, `dir' created
lftp anonymous@localhost:/pub> mkdir -f dir
lftp anonymous@localhost:/pub> ls
drwx------ 2 116 122 4096 Aug 10 12:04 dir
Maybe your lftp client is outdated. I tested it with Debian 9.
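To check which lftp version is installed (a quick check, not part of the original answer):
lftp --version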

Permission Issue: Creating postgres backup as postgres user

I have the following postgres backup script; it's a shell script written to run as the postgres user.
But the problem is that the postgres user doesn't have permission to write to these directories. I as a user don't have sudo on these machines, but I have changed the directory to 755 and added it to one of the groups that has read-write-execute permission. Since the postgres user isn't part of that unix user group, I guess that is why I am running into this issue.
My goal is to put this in the crontab, but prior to that I need to get the script running with the proper permissions:
#!/bin/bash
# location to store backups
backup_dir="/location/to/dir"
# name of the backup file has the date
backup_date=`date +%d-%m-%Y`
# only keep the backup for 30 days (maintain low storage)
number_of_days=30
databases=`psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`
for i in $databases; do
if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
echo Dumping $i to $backup_dir$i\_$backup_date
pg_dump -Fc $i > $backup_dir$i\_$backup_date
fi
done
find $backup_dir -type f -prune -mtime +$number_of_days -exec rm -f {} \;
Before doing this, be sure to log in as a superuser (sudo su) and try executing these:
usermod -aG unix postgres (add the existing postgres user to the unix group)
su postgres (log in as the postgres user)
mkdir folder (in the directory where postgres needs to write files, create the folder)
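If adding postgres to the group is not an option, a rough sketch (assuming someone with root access can change ownership of the backup directory) is to hand the directory itself over to the postgres user instead:
# run as root; /location/to/dir is the backup_dir placeholder from the script above
chown -R postgres:postgres /location/to/dir
chmod 700 /location/to/dir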
*** From this line down is my answer to the #find-missing-semicolon question
Just to illustrate an example with a shell script, you can capture the password using the read command and put it to a variable. Here I stored the password in password and echoed it afterwards. I hope this helps.
#!/bin/bash
read -s -p "Password: " password
echo $password

How can I use bash to handle sql backups?

I have a database and I used to back it up daily, manually, like so:
mysqldump -uroot -ppassword forum > 4.25.2011.sql
However, I've been doing the above by hand and wanted to use a script, besides mysqldumper, to do the job.
If 3 .sql files already exist in the backup directory, how can I delete the oldest one?
So far all I have is:
#!/bin/bash
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
I still can't figure out how to use a variable for the name to save my sql file as. How would I do that too?
My VPS is limited to 10 GB and disk space is a concern, or else I wouldn't delete any files.
For your first part, instead of
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
do
today=$(date +%m.%d.%Y)
mysqldump -uroot -ppassword forum > /root/backups/$today.sql
In particular:
there must not be any spaces around the equals sign
running eval is not what you want
The simplest way to only keep three files would be:
rm /root/backups/forum-3.sql
mv /root/backups/forum-2.sql /root/backups/forum-3.sql
mv /root/backups/forum-1.sql /root/backups/forum-2.sql
mysqldump -uroot -ppassword forum > /root/backups/forum-1.sql
Tools like ls -l or the content of the file should tell you the date if you need it.
If you really need the date in the file name, the easiest tool to help you is GNU date:
dateformat="%m.%d.%Y"
rm /root/backups/forum-$(date -d "-3 days" +$dateformat).sql
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
Or use find, e.g.
find . -name "forum*.sql" -mtime +3 -delete
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
After that, you could look at logrotate.
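As a related sketch (not from the answers above; GNU ls and xargs are assumed), you can combine dated filenames with keeping only the three newest dumps:
# dump with the date in the name, then delete everything but the 3 newest dumps
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +%m.%d.%Y).sql
ls -t /root/backups/forum-*.sql | tail -n +4 | xargs -r rm --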
Since you're rotating the backups daily, you could just delete all of the ones older than 3 days. Here's also how to write to a filename based on the date:
#!/bin/bash
mysqldump -uroot -ppassword forum > /root/backups/`date +%m.%d.%Y`.sql
find /root/backups -type f -ctime +3 -exec rm {} \;
If you just want to delete the oldest file you can add the following to your script:
ls -1tr *.sql | head -1 | xargs rm
Well, I have a dedicated sql server that holds all the databases for my remote apache servers on AWS.
What I did is make x scripts for x databases, since I want to have total control over each sql database without messing things up.
Now, what you need to do for the db backup is use this script, named demosql.sh:
#!/bin/bash
# Database credentials
user="database user"
password="db password"
host="localhost"
db_name="database name"
# Other options
backup_path="/location/of/folder/for/sql"
date=$(date +"%d-%b-%Y")
time2=$(date +"%R")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user=$user --password=$password --host=$host $db_name > $backup_path/$db_name-$date-$time2.sql
This is how I made a few scripts for each db I want to control:
#!/bin/bash
#project name db
source $(dirname $0)/demosql.sh
Then, immediately after the backup, I want to put all of my sql in a folder to sync with s3 storage, like this:
#!/bin/bash
s3cmd sync --skip-existing /location/ s3://folder/
Also, you can control a script like this with a cronjob on remote servers, using ssh with bash.
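For instance, a minimal crontab sketch (the script paths here are placeholders, not from the answer) that runs the dump nightly and syncs to S3 half an hour later:
# m h dom mon dow command
0 1 * * * /bin/bash /location/of/scripts/demosql.sh
30 1 * * * /bin/bash /location/of/scripts/s3sync.sh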
