I run the following script on my server:
#!/bin/bash
# System + MySQL backup script
# Full backup day - Fri (rest of the day do incremental backup)
# Copyright (c) 2005-2006 nixCraft <http://www.cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# Automatically generated by http://bash.cyberciti.biz/backup/wizard-ftp-script.php
# ---------------------------------------------------------------------
### System Setup ###
DIRS="/var/www /etc"
BACKUP=/tmp/backup.$$
NOW=$(date +"%d-%m-%Y")
INCFILE="/root/tar-inc-backup.dat"
DAY=$(date +"%a")
FULLBACKUP="Fri"
### MySQL Setup ###
MUSER="xxx"
MPASS="xxx"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
GZIP="$(which gzip)"
### FTP server Setup ###
FTPD="/backup/incremental"
FTPU="xxx"
FTPP="xxx"
FTPS="xxx"
NCFTP="$(which ncftpput)"
### Other stuff ###
EMAILID="xxx"
### Start Backup for file system ###
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
### See if we want to make a full backup ###
if [ "$DAY" == "$FULLBACKUP" ]; then
FTPD="/backup/full"
FILE="fs-full-$NOW.tar.gz"
tar -zcvf $BACKUP/$FILE $DIRS
else
i=$(date +"%Hh%Mm%Ss")
FILE="fs-i-$NOW-$i.tar.gz"
tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
fi
### Start MySQL Backup ###
# Get all database names
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BACKUP/mysql-$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
### Dump backup using FTP ###
#Start FTP backup using ncftp
ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF
### Find out if ftp backup failed or not ###
if [ "$?" == "0" ]; then
rm -f $BACKUP/*
T=/tmp/backup.pass
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup passed" >>$T
mail -s "BACKUP PASSED" "$EMAILID" <$T
rm -f $T
else
T=/tmp/backup.fail
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup failed" >>$T
mail -s "BACKUP FAILED" "$EMAILID" <$T
rm -f $T
fi
Everything is working well, except that the script always tells me everything went fine, even though the remote FTP server is full. Can someone help me change this so that it gives me an error when the remote server is full?
Also, is there any way to remove old backups?
Thanks,
Daniel
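Not an authoritative fix, but a minimal sketch of the usual approach: the "$?" after the here-document only reflects the ncftp session itself, not whether each mput succeeded, so replacing the interactive session with ncftpput (which the script already locates in $NCFTP) gives you a non-zero exit status when an upload fails:
### Start FTP backup using ncftpput so a failed upload sets a non-zero status ###
$NCFTP -u "$FTPU" -p "$FTPP" -m "$FTPS" "$FTPD/$NOW" $BACKUP/*
FTPSTATUS=$?
### Find out if the ftp backup failed or not ###
if [ "$FTPSTATUS" -eq 0 ]; then
    rm -f $BACKUP/*
    echo "Backup passed on $(date) from $(hostname)" | mail -s "BACKUP PASSED" "$EMAILID"
else
    echo "Backup failed on $(date) from $(hostname)" | mail -s "BACKUP FAILED" "$EMAILID"
fi
Pruning old backups on the FTP side would be a separate ncftp here-document issuing rm/rmdir against a dated directory (for example the one named after $(date -d '7 days ago' +"%d-%m-%Y")); whether wildcard deletes work depends on the server, so test that by hand first.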
Related
I've been using cronic to silence emails from cron jobs when the job is successful. I'm trying to customize it so that when the exit code is 0 and the error output matches the string "mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH.", no email is sent. Below is the script, then the contents of the email, then my attempt at suppressing the email, which is not working. Any idea what I may be doing wrong?
#!/bin/bash
# Cronic v3 - cron job report wrapper
# Copyright 2007-2016 Chuck Houpt. No rights reserved, whatsoever.
# Public Domain CC0: http://creativecommons.org/publicdomain/zero/1.0/
set -eu
TMP=$(mktemp -d)
OUT=$TMP/cronic.out
ERR=$TMP/cronic.err
TRACE=$TMP/cronic.trace
set +e
"$#" >$OUT 2>$TRACE
RESULT=$?
set -e
PATTERN="^${PS4:0:1}\\+${PS4:1}"
if grep -aq "$PATTERN" $TRACE
then
! grep -av "$PATTERN" $TRACE > $ERR
else
ERR=$TRACE
fi
if [ $RESULT -ne 0 -o -s "$ERR" ]
then
echo "Cronic detected failure or error output for the command:"
echo "$#"
echo
echo "RESULT CODE: $RESULT"
echo
echo "ERROR OUTPUT:"
cat "$ERR"
echo
echo "STANDARD OUTPUT:"
cat "$OUT"
if [ $TRACE != $ERR ]
then
echo
echo "TRACE-ERROR OUTPUT:"
cat "$TRACE"
fi
fi
rm -rf "$TMP"
Here is what the email notification looks like:
Cronic detected failure or error output for the command:
/usr/local/sbin/reg-backup-cronic.sh daily
RESULT CODE: 0
ERROR OUTPUT:
mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH.
STANDARD OUTPUT:
/dev/sde1 on /VessRAID/RH type ext4 (rw,relatime)
Here is my attempt at a wrapper script:
#!/bin/bash
/usr/local/sbin/reg-backup.sh $1
CODE=$?
err=$TRACE
if [[ $CODE -eq 0 && $err = "mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH." ]]
then
exit $CODE
fi
Alas the emails continue.
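For what it's worth, the wrapper as written can't work: $TRACE is never set in it (it only exists inside cronic), and the script's stderr isn't captured at all. A minimal sketch of what the comparison would have to look like, assuming the goal is still to swallow that one known message:
#!/bin/bash
# Capture stderr so it can be compared against the known benign message.
ERRFILE=$(mktemp)
/usr/local/sbin/reg-backup.sh "$1" 2>"$ERRFILE"
CODE=$?
if [[ $CODE -eq 0 && $(cat "$ERRFILE") == "mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH." ]]; then
    rm -f "$ERRFILE"
    exit 0          # clean run with only the expected message: stay silent
fi
cat "$ERRFILE" >&2  # anything else: re-emit stderr so cronic still reports it
rm -f "$ERRFILE"
exit $CODE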
Hat tip to the creator of cronic, Chuck Houpt, for cluing me in to the answer, which was to look at the original script and see why the error was happening. Case sensitivity got the best of me:
if mount | grep Vessraid; then
echo starting $1 backup >> /var/log/vessraid.log
Note the case; it should have been VessRAID:
if mount | grep VessRAID; then
echo starting $1 backup >> /var/log/vessraid.log
Now emails only happen when there really is an error.
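As a small defensive variant (not part of the original fix), a quiet, case-insensitive match would have sidestepped the typo entirely:
if mount | grep -qi vessraid; then
    echo "starting $1 backup" >> /var/log/vessraid.log
fi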
I'm trying to capture a live tcpdump from a custom-designed Linux system. The command I'm using so far is:
plink.exe -ssh user@IP -pw PW "shell nstcpdump.sh -s0 -U -w - not port 22 and not host 127.0.0.1" | "C:\Program Files\Wireshark\wireshark" -i -
This fails because, when executing the command on the remote system, this (custom) shell outputs "Done" before sending data. I tried to find a way to remove the "Done" message from the shell, but there doesn't appear to be one.
So I came up with this (added findstr -V):
plink.exe -ssh user@IP -pw PW "shell nstcpdump.sh -s0 -U -w - not port 22 and not host 127.0.0.1" | findstr -V "Done" | "C:\Program Files\Wireshark\wireshark" -i -
This works more or less, but eventually I get some errors and the live capture stops. I believe it might have something to do with buffering, but I'm not sure.
Does anyone know of any other method of removing/bypassing the first bytes/chars of the output from plink/remote shell?
[edit]
As asked: nstcpdump.sh is a wrapper for tcpdump. As mentioned before, this system is highly customized. nstcpdump.sh code:
root@hostname# cat /netscaler/nstcpdump.sh
#!/bin/sh
# piping the packet trace to the tcpdump
#
# FILE: $Id: //depot/main/rs_120_56_14_RTM/usr.src/netscaler/scripts/nstcpdump.sh#1 $
# LAST CHECKIN: $Author: build $
# $DateTime: 2017/11/30 02:14:38 $
#
#
# Options: any TCPDUMP options
#
TCPDUMP_PIPE=/var/tmp/tcpdump_pipe
NETSCALER=${NETSCALER:-`cat /var/run/.NETSCALER`}
[ -r ${NETSCALER}/netscaler.conf ] && . ${NETSCALER}/netscaler.conf
TIME=${TIME:-3600}
MODE=${MODE:-6}
STARTCMD="start nstrace -size 0 -traceformat PCAP -merge ONTHEFLY -filetype PIPE -skipLocalSSH ENABLED"
STOPCMD="stop nstrace "
SHOWCMD="show nstrace "
NSCLI_FILE_EXEC=/netscaler/nscli
NSTRACE_OUT_FILE=/tmp/nstrace.out
NS_STARTTRACE_PIDFILE=/tmp/nstcpdump.pid
TRACESTATE=$(nsapimgr -d allvariables | grep tracestate | awk '{ print $2}')
trap nstcpdump_exit 1 2 15
nstcpdump_init()
{
echo "##### WARNING #####"
echo "This command has been deprecated."
echo "Please use 'start nstrace...' command from CLI to capture nstrace."
echo "trace will now start with all default options"
echo "###################"
if [ ! -d $NSTRACE_DIR ]
then
echo "$NSTRACE_DIR directory doesn't exist."
echo "Possible reason: partition is not mounted."
echo "Check partitions using mount program and try again."
exit 1
fi
if [ ! -x $NSCLI_FILE_EXEC ]
then
echo "$NSCLI_FILE_EXEC binary doesn't exist"
exit 1
fi
if [ -e $NSTRACE_OUT_FILE ]
then
rm $NSTRACE_OUT_FILE
echo "" >> $NSTRACE_OUT_FILE
fi
}
nstcpdump_start_petrace()
{
sleep 0.5;
$NSCLI_FILE_EXEC -U %%:.:. $STARTCMD >/tmp/nstcpdump.sh.out
rm -f ${NS_STARTTRACE_PIDFILE}
}
nstcpdump_start()
{
# exit if trace is already running
if [ $TRACESTATE -ne 0 ]
then
echo "Error: one instance of nstrace is already running"
exit 2
fi
nstcpdump_start_petrace &
echo $! > ${NS_STARTTRACE_PIDFILE}
tcpdump -n -r - $TCPDUMPOPTIONS < ${TCPDUMP_PIPE}
nstcpdump_exit
exit 1
}
nstcpdump_exit()
{
if [ -f ${NS_STARTTRACE_PIDFILE} ]
then
kill `cat ${NS_STARTTRACE_PIDFILE}`
rm ${NS_STARTTRACE_PIDFILE}
fi
$NSCLI_FILE_EXEC -U %%:.:. $STOPCMD >> /dev/null
exit 1
}
nstcpdump_usage()
{
echo `basename $0`: utility to view/save/sniff LIVE packet capture on NETSCALER box
tcpdump -h
echo
echo NOTE: tcpdump options -i, -r and -F are NOT SUPPORTED by this utility
exit 0
}
########################################################################
while [ $# -gt 0 ]
do
case "$1" in
-h )
nstcpdump_usage
;;
-i )
nstcpdump_usage
;;
-r )
nstcpdump_usage
;;
-F )
nstcpdump_usage
;;
esac
break;
done
TCPDUMPOPTIONS="$#"
check_ns nstcpdump
#nstcpdump_init
#set -e
if [ ! -e ${TCPDUMP_PIPE} ]
then
mkfifo $TCPDUMP_PIPE
if [ $? -ne 0 ]
then
echo "Failed creating pipe [$TCPDUMP_PIPE]"
exit 1;
fi
fi
nstcpdump_start
Regards
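Back to the leading "Done" banner: findstr is line-oriented and not binary-safe, which would explain the errors once real pcap data flows through it. A minimal sketch of a binary-safe alternative, using ssh in place of plink and assuming the client side can run a POSIX shell (WSL, Cygwin, or a Linux box) with wireshark on the PATH, and that the banner is exactly the five bytes "Done" plus a newline (adjust the byte count if it ends in \r\n):
# tail -c +6 starts output at byte 6, i.e. skips the 5-byte "Done\n" banner
# without touching the binary pcap stream that follows.
ssh user@IP "shell nstcpdump.sh -s0 -U -w - not port 22 and not host 127.0.0.1" \
  | tail -c +6 \
  | wireshark -k -i -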
This issue is currently driving me nuts.
I setup a crontab with sudo crontab -e
The contents are 1 * * * * /home/bolte/bin/touchtest.sh
The contents of that file are:
#!/bin/bash
touch /home/bolte/bin/test.log
It creates the file, but the script below will not run.
#!/bin/bash
# CHANGE THESE
auth_email="11111111#live.co.uk"
auth_key="11111111111111111" # found in cloudflare
account settings
zone_name="11111.io"
record_name="11111.bolte.io"
# MAYBE CHANGE THESE
ip=$(curl -s http://ipv4.icanhazip.com)
ip_file="/home/bolte/ip.txt"
id_file="/home/bolte/cloudflare.ids"
log_file="/home/bolte/cloudflare.log"
# LOGGER
log() {
if [ "$1" ]; then
echo -e "[$(date)] - $1" >> $log_file
fi
}
# SCRIPT START
log "Check Initiated"
if [ -f $ip_file ]; then
old_ip=$(cat $ip_file)
if [ $ip == $old_ip ]; then
echo "IP has not changed."
exit 0
fi
fi
if [ -f $id_file ] && [ $(wc -l $id_file | cut -d " " -f 1) == 2 ]; then
zone_identifier=$(head -1 $id_file)
record_identifier=$(tail -1 $id_file)
else
zone_identifier=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=$zone_name" -H "X-Auth-E$
record_identifier=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$zone_identifier/dns_record$
echo "$zone_identifier" > $id_file
echo "$record_identifier" >> $id_file
fi
update=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zone_identifier/dns_records/$record_ident$
I've been trying to troubleshoot why this script will not run every minute. There doesn't seem to be any output in the same folder as the script, which is located at /home/bolte/cloudflare-update-record.sh.
OK, so the answer to this was that I was editing the crontab with sudo while the files were located in my user's home folder. That is why they weren't working. Resolved my own issue.
If you have this issue, just use crontab -e rather than sudo crontab -e, and specify full paths for your file outputs unless you set the proper path variables in your script.
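For anyone else hitting this, a sketch of the resulting user crontab entry (crontab -e, no sudo), with an absolute path and the output redirected to a log so failures are visible; the log file name is just an example:
* * * * * /home/bolte/cloudflare-update-record.sh >> /home/bolte/cloudflare-update-record.log 2>&1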
I am writing a script that selects some IPs from a table in a DB, then uses iptables rules to ban these IPs, and finally notifies me by e-mail, but I am getting 2 errors:
#!/bin/bash
Now=$(date +"%d-%m-%Y %T")
fileLocation="/var/lib/mysql/DBName/"
fileName="ip2ban.txt"
filePath=$fileLocation$fileName
curLocation=$(pwd)
#Connect to DB and select ban_ip
mysql -u root -pPASSWORD -D DBName -e 'SELECT ip INTO OUTFILE "'$filePath'" FROM ban_ip WHERE ip_tables = "0"' >> banIP.log 2>&1
selRes=$?
# If the command was successful
if [ $selRes -eq "0" ]
then
# We need to check if the file exists on the saved location
#find $fileLocation -type f -iname "ip2ban.txt" -empty => To check if file empty or not
if [ -f $filePath ]
then
mv $filePath $curLocation'/ip2ban.txt'
# Connect to DB and update the ban_ip
mysql -u root -pPASSWORD -D DBName -e 'UPDATE ban_ip SET ip_tables = "1" WHERE ip_tables = "0"' >> banIP.log 2>&1
upRes=$?
if [ $upRes -eq "0" ]
then
# Send message for successful result
echo -e "Database updated with new banned IPs on $Now \nThank you for using this script" 2>&1 | sed '1!b;s/^/To: myID@gmail.com\nSubject: New banned IPs[Success]\n\n/' | sendmail -t
else
# Send message for failure result
echo -e "We cannot update the ban_ip table on $Now \nThank you for using this script" 2>&1 | sed '1!b;s/^/To: myID#gmail.com\nSubject: [Failure] New banned IPs\n\n/' | sendmail -t
fi
fi
else
echo 'Something wrong with Select statment on' $Now >> banIP.log
fi
# Save IPTables rules
iptables-save > /root/Scripts/IPTables/BannedIPs.conf # line 53
I am getting 2 errors:
line 53: iptables-save: command not found
line 37: sendmail: command not found
However, sendmail is already installed, along with mail and postfix:
# which sendmail
/usr/sbin/sendmail
# which mail
/usr/bin/mail
# which postfix
/usr/sbin/postfix
Thanks for your usual support
According to the crontab(5) man page for Linux:
PATH is set to "/usr/bin:/bin".
Meaning your shell script will not be able to find anything under /usr/sbin or /sbin. Change this by adding the following near the top of your script:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
Also, the environment can be set from within the crontab. See https://stackoverflow.com/a/14694543/5766144 for how to do this.
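For example, the PATH can also live in the crontab itself so that iptables-save and sendmail resolve for every job; the schedule and script name below are placeholders, not taken from the question:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# example schedule: run the ban script nightly at 02:00
0 2 * * * /root/Scripts/IPTables/banIP.sh >> /root/Scripts/IPTables/banIP.log 2>&1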
I have a file named backup.sh; I have installed it and set the path variable too, but when I double-click the backup.sh file it says something I don't understand. The scenario is to back up a MongoDB database using a script in Grails.
#!/bin/bash
#
# Michael Mottola
# <mikemottola@gmail.com>
# December 18, 2011
#
# Creates backup files (bson) of all MongoDb databases on a given server.
# Default behaviour dumps the mongo database and tars the output into a file
# named after the current date. ex: 2011-12-19.tar.gz
#
### Set server settings
HOST="localhost"
PORT="27017" # default mongoDb port is 27017
USERNAME=""
PASSWORD=""
# Set where database backups will be stored
# keyword DATE gets replaced by the current date, you can use it in either path below
BACKUP_PATH="Desktop/path/to/backup/directory" # do not include trailing slash
FILE_NAME="DATE" #defaults to [currentdate].tar.gz ex: 2011-12-19.tar.gz
##################################################################################
# Should not have to edit below this line unless you require special functionality
# or wish to make improvements to the script
##################################################################################
# Auto detect unix bin paths, enter these manually if script fails to auto detect
MONGO_DUMP_BIN_PATH="$(which mongodump)"
TAR_BIN_PATH="$(which tar)"
# Get todays date to use in filename of backup output
TODAYS_DATE=`date "+%Y-%m-%d"`
# replace DATE with todays date in the backup path
BACKUP_PATH="${BACKUP_PATH//DATE/$TODAYS_DATE}"
# Create BACKUP_PATH directory if it does not exist
[ ! -d $BACKUP_PATH ] && mkdir -p $BACKUP_PATH || :
# Ensure directory exists before dumping to it
if [ -d "$BACKUP_PATH" ]; then
cd $BACKUP_PATH
# initialize temp backup directory
TMP_BACKUP_DIR="mongodb-$TODAYS_DATE"
echo; echo "=> Backing up Mongo Server: $HOST:$PORT"; echo -n ' ';
# run dump on mongoDB
if [ "$USERNAME" != "" -a "$PASSWORD" != "" ]; then
$MONGO_DUMP_BIN_PATH --host $HOST:$PORT -u $USERNAME -p $PASSWORD --out $TMP_BACKUP_DIR >> /dev/null
else
$MONGO_DUMP_BIN_PATH --host $HOST:$PORT --out $TMP_BACKUP_DIR >> /dev/null
fi
# check to see if mongoDb was dumped correctly
if [ -d "$TMP_BACKUP_DIR" ]; then
# if file name is set to nothing then make it todays date
if [ "$FILE_NAME" == "" ]; then
FILE_NAME="$TODAYS_DATE"
fi
# replace DATE with todays date in the filename
FILE_NAME="${FILE_NAME//DATE/$TODAYS_DATE}"
# turn dumped files into a single tar file
$TAR_BIN_PATH --remove-files -czf $FILE_NAME.tar.gz $TMP_BACKUP_DIR >> /dev/null
# verify that the file was created
if [ -f "$FILE_NAME.tar.gz" ]; then
echo "=> Success: `du -sh $FILE_NAME.tar.gz`"; echo;
# forcely remove if files still exist and tar was made successfully
# this is done because the --remove-files flag on tar does not always work
if [ -d "$BACKUP_PATH/$TMP_BACKUP_DIR" ]; then
rm -rf "$BACKUP_PATH/$TMP_BACKUP_DIR"
fi
else
echo "!!!=> Failed to create backup file: $BACKUP_PATH/$FILE_NAME.tar.gz"; echo;
fi
else
echo; echo "!!!=> Failed to backup mongoDB"; echo;
fi
else
echo "!!!=> Failed to create backup path: $BACKUP_PATH"
fi
It's giving "--host: command not found". Updated for later help.
bash FilePath/backup.sh
For example, if you have the backup.sh file in E:/SohamShetty/Files, then you should do it like this:
bash E:/SohamShetty/Files/backup.sh
You can run it as:
bash backup.sh
or
chmod +x backup.sh
./backup.sh
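On the "--host: command not found" error itself: a likely cause (an assumption, since the environment isn't shown) is that which mongodump returns nothing, so $MONGO_DUMP_BIN_PATH ends up empty and the shell tries to execute --host as a command. A small guard near the top of the script makes that failure explicit:
# Abort early if mongodump cannot be located instead of running "--host ..."
MONGO_DUMP_BIN_PATH="$(which mongodump)"
if [ -z "$MONGO_DUMP_BIN_PATH" ]; then
    echo "mongodump not found in PATH; set MONGO_DUMP_BIN_PATH manually" >&2
    exit 1
fi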