I am new at this, so please be patient.
I have a script that I did not write (I'm not that advanced, maybe one day) and I need it to output logs in /var/log.
Can anyone please help with it?
I need to know that all the actions in that script completed.
Here is the script:
#!/bin/bash
# copy the old database to a timestamped copy
TIMESTAMP=$(date +%d)
REPORT_EMAIL=user@domain.com
# backup Spamassassin bayes db
sa-learn -p /usr/mailcleaner/etc/mailscanner/spam.assassin.prefs.conf --siteconfigpath /usr/mailcleaner/share/spamassassin --backup >/var/mailcleaner/spool/spamassassin/spamass_rules.bak
# backup Bogofilter bayes db
cp -a /root/.bogofilter/wordlist.db "/root/.bogofilter/$TIMESTAMP-wordlist.db"
if [ -f "/root/.bogofilter/$TIMESTAMP-wordlist.db.gz" ]
then
rm -f "/root/.bogofilter/$TIMESTAMP-wordlist.db.gz"
fi
gzip "/root/.bogofilter/$TIMESTAMP-wordlist.db"
# get the spam and ham from the imap mailbox for the spamassassin and bogofilter db's
/opt/mailcleaner/scripts/imap-sa-learn.pl
if [ $? -ne 0 ]
then
(
echo "Subject: Bogofilter database update $(hostname) failed"
ls -l /var/mailcleaner/spool/bogofilter/database/
) | /usr/sbin/sendmail $REPORT_EMAIL
exit 1
fi
# copy the database to the right location
cp /root/.bogofilter/wordlist.db /var/mailcleaner/spool/bogofilter/database/wordlist.db
# If slave(s) Mailcleaner exists, ssh copy dbs to the slave(s)
#scp /root/.bogofilter/wordlist.db mailcleaner2.domain.com:/var/mailcleaner/spool/bogofilter/database/
#scp /var/mailcleaner/spool/spamassassin/spamass_rules.bak mailcleaner2.domain.com:/var/mailcleaner/spool/spamassassin/
# get the spam and ham counts from bogofilter - this just prints how many spam and ham you collected so far...
/opt/bogofilter/bin/bogoutil -w /var/mailcleaner/spool/bogofilter/database/wordlist.db .MSG_COUNT
Again thanks for the help.
Raj
It's easiest just to use output redirection when you run the script:
./the_script.sh > /var/log/the_script.log 2>&1
The 2>&1 is to capture stderr (the error stream) as well as stdout (the output stream).
If you want to write a log file and see the output on the console, use tee:
./the_script.sh 2>&1 | tee /var/log/the_script.log
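If you'd rather not rely on the caller, you can also put the redirection inside the script itself, right after the shebang. A minimal sketch, assuming bash and a writable /var/log:
#!/bin/bash
# From this point on, send everything the script writes (stdout and stderr)
# to the log file, appending across runs.
exec >> /var/log/the_script.log 2>&1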
My aim is to create a shell script that logs in, filters the list of files available, and selects a file to get. I need to run commands as I would in bash.
My sample code is:
sshpass -p password sftp user@10.10.10.10 <<EOF
cd /home/
var=$(ls -rt)
echo $var
echo "select a folder"
read folder
cd $folder
filen=$(ls -rt)
echo $filen
echo "select a file"
read name
get $name
bye
EOF
The above approach will not work. Remember that the 'here document' (<<EOF ... EOF) is evaluated as input to the sftp session. The prompts will be displayed, and user input will be requested, BEFORE any output (ls in this case) is available from sftp.
Consider using lftp, which has more flexible constructs. In particular, it will let you use variables, create commands dynamically, etc.
lftp sftp://user@host <<EOF
cd /home
ls
echo "Select Folder"
shell 'read folder ; echo "cd $folder" >> temp-cmd'
source temp-cmd
ls
echo "Select Folder"
shell 'read file ; echo "get $file" >> temp-cmd'
source temp-cmd
EOF
In theory, you can create similar constructs with pipes and sftp (maybe a co-process?), but this is much harder.
Of course, the other alternative is to create separate sftp sessions for each listing, but this will be expensive/inefficient.
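For completeness, a rough sketch of that separate-sessions approach, reusing the host and credentials from the question (each sftp invocation opens a fresh connection, which is exactly the inefficiency mentioned above):
# Session 1: list the folders, then prompt the user.
sshpass -p password sftp user@10.10.10.10 <<< "ls /home"
echo "select a folder"
read folder
# Session 2: list the chosen folder, then prompt again.
sshpass -p password sftp user@10.10.10.10 <<< "ls /home/$folder"
echo "select a file"
read name
# Session 3: fetch the chosen file.
sshpass -p password sftp user@10.10.10.10 <<< "get /home/$folder/$name"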
After some research and experimentation, I found a way to create batch/interactive sessions with sftp. Posting as a separate answer, as I still believe the easier way to go is with lftp (see my other answer), but this might be useful on systems without lftp.
The initial exec creates FD #3, pointing at the original stdout (probably the user's terminal). Anything the subshell writes to stdout is piped to sftp and executed as a command; anything written to FD #3 appears on the terminal.
The pipe is required to allow both processes to run concurrently; using a here document would result in sequential execution. The sleep statements are required to give sftp time to finish retrieving data from the remote host.
exec 3>&1
(
echo "cd /home/"
echo "ls"
sleep 3 # Allow time for sftp
echo "select a folder" >&3
read folder
echo "cd $folder"
echo "ls"
sleep 3 # Allow time for sftp
echo "select a file" >&3
read name
echo "get $name"
echo "bye"
) | sshpass -p password sftp user@10.10.10.10
I would suggest creating a file with the patterns of the files you want downloaded; then you can download them all in a single line:
sftp_connection_string <<< $"ls -lrt"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e 's/^/get -P /g'|sftp_connection_string
If there are multiple known folders to look into, then:
Script version:
for fldr in folder1 folder2 folder3;do
sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}\//g"|sftp_connection_string
done
One-liner:
for fldr in folder1 folder2 folder3;do sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}\//g"|sftp_connection_string;done
Let me know if it works.
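Note that sftp_connection_string above is a placeholder for however you open the connection. One way it might be defined (hypothetical credentials; -b - tells sftp to read its batch commands from stdin, which serves both the listing stage and the final get stage of the pipeline):
sftp_connection_string() {
    # Read sftp commands from stdin (-b -) and run them on the remote host.
    sshpass -p password sftp -b - user@10.10.10.10
}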
I would like to ask for your help. I'm a scripting noob and I need to make a script to back up our webserver via FTPS. I made this script, but it only creates the backup file; it does not upload that file to the FTPS server. Yet when I run that lftp command alone, it works. I've been looking at this for quite a while, but I can't figure out why it's not working... Could someone help, please? Thank you!
#!/bin/bash
# SETTINGS
RMDATE=$(date --iso -d '10 days ago').tar
FTPUSER=user
FTPPW=pass
FTPSERVER=my.server.com
LFTP=/usr/bin/lftp
# DELETE OLD BACKUPS
rmold () {
$LFTP << EOF
open ${FTPUSER}:${FTPPW}@${FTPSERVER}
rm -rf ${RMDATE}
bye
EOF
echo "Done."
}
# PLESK BACKUP
if /usr/local/psa/bin/pleskbackup server -v --exclude-logs >/tmp/backup-plesk.log 2>&1 --output-file=/var/www/bak/`date -I`.tar; then
if lftp -c "open ${FTPUSER}:${FTPPW}#${FTPSERVER}; put /var/www/bak/`date -I`.tar"; then
rm -f /var/www/bak/`date -I`.tar
/usr/bin/sendEmail <<< some parameters >>> # backup success message
rmold
else
/usr/bin/sendEmail <<< some parameters >>> # upload error message
exit 1
fi
else
/usr/bin/sendEmail <<< some parameters >>> # backup error message
exit 1
fi
I had a problem with the FTP FEAT command and the certificate. I disabled FEAT in the lftp config file and specified the full path to the certificate. It works now.
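For anyone hitting the same thing, the two relevant settings can go in the lftp config file (~/.lftprc or /etc/lftp.conf); the certificate path below is just an example:
# Don't send the FEAT command to the server.
set ftp:use-feat no
# Full path to the CA certificate used to verify the server.
set ssl:ca-file "/etc/ssl/certs/my-ftps-server.pem"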
I have a script I am trying to work out to scan my LAN and send me notification if there is a new MAC address that does not appear in my master list. I believe my variables may be messed up. This is what I have:
#!/bin/bash
LIST=$HOME/maclist.log
MASTERFILE=$HOME/master
FILEDIFF="$(diff $LIST $MASTERFILE)"
# backup the maclist first
if [ -f $LIST ]; then
cp $LIST maclist_`date +%Y%m%H%M`.log.bk
else
touch $LIST
fi
# this will scan the network and extract the IP and MAC address
nmap -n -sP 192.168.122.0/24 | awk '/^Nmap scan/{IP=$5};/^MAC/{print IP,$3};{next}' > $LIST
# this will use a diff command to compare the maclist created above and master list of known good devices on the LAN
if [ $FILEDIFF ] 2> /dev/null; then
echo
echo "---- All is well on `date` ----" >> macscan.log
echo
else
# echo -e "\nWARNING!!" | mutt -e 'my_hdr From:user@email.com' -s "WARNING!! NEW DEVICE ON THE LAN" -i maclist.log user@email.com
echo "emailing you"
fi
When I execute this while maclist.log does not exist, I get this response:
diff: /root/maclist.log: No such file or directory
If I execute it again with maclist.log existing, the file gets backed up by the cp line without any issue.
The line
FILEDIFF="$(diff $LIST $MASTERFILE)"
executes the diff when it is run (not when you use $FILEDIFF later). At that time the list file hasn't been created yet.
The easiest fix is just to put the diff command in full where $FILEDIFF is currently used.
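A sketch of that fix, running diff only after the nmap line has written $LIST:
# Compare only once $LIST has been (re)created by nmap above.
if diff "$LIST" "$MASTERFILE" > /dev/null 2>&1; then
    echo "---- All is well on $(date) ----" >> macscan.log
else
    echo "emailing you"
fi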
The OS is CentOS 7. I have a small application that implements the functionality below:
1. Read information from config.ini, like this:
# Configuration file for ftpxml service
# Remote FTP server informations
ftpadress=1.2.3.4
username=test
password=test
# Local folders configuration
# folderA: folder for incomming files
folderA=/u02/dump
# folderB: Successfuly transfered files are copied here
folderB=/u02/dump_bak
# retrydir: when ftp upload fails, store failed files in this
# directory
retrydir=/u02/dump_retry
2. Monitor folder A. If there are any newly-added files in A, do step 3.
3. FTP these new files to a remote FTP server in the order of their creation time. When an upload finishes, copy the uploaded file to folder B and delete it from folder A.
4. If the FTP fails, store the relevant files in retrydir and try to FTP them again later.
5. Record every operation in a log file.
Detailed setup instructions for the application:
Install the ncftp package: yum install ncftp -y. It's not a service or a daemon, just a client tool that the bash script invokes for FTP.
Customize these files to suit your setup using vi: config.ini, ftpmon.path and ftpmon.service.
Copy ftpmon.path and ftpmon.service to /etc/systemd/system/, copy config.ini and ftpxml.sh to /u02/ftpxml/, then run: chmod +x ftpxml.sh
Start the monitoring tool:
sudo systemctl start ftpmon.path
If you want to enable it at boot time, just enter: sudo systemctl enable ftpmon.path
Set up a cron task to purge queued files (add option -p):
*/5 * * * * /u02/ftpxml/ftpxml.sh -p
Now the application seems to work well, except in one special situation:
When we put several files into folder A in quick succession, for instance 1.txt, 2.txt and 3.txt one after another, we usually find that 1.txt uploads fine, but the subsequent files fail to upload and stay in folder A.
Now I am trying to fix this problem. I suppose the cause is that while the first file is being uploaded, the second file is created in folder A, so the script never notices it.
Below is the code of ftpxml.sh:
#!/bin/bash
# ! Please read the README.txt file !
# Copy files from upload dir to remote FTP then save them
# to folderB
# look our location
SCRIPT=$(readlink -f $0)
# Absolute path to this script
SCRIPTPATH=`dirname $SCRIPT`
PIDFILE=${SCRIPTPATH}/ftpmon_prog.lock
# load config.ini
if [ -f $SCRIPTPATH/config.ini ]; then
source $SCRIPTPATH/config.ini
else
echo "No config found. Exiting"
fi
# Lock to avoid multiple instances
if [ -f $PIDFILE ]; then
kill -0 $(cat $PIDFILE) 2> /dev/null
if [ $? == 0 ]; then
exit
fi
fi
# Store PID in lock file
echo $$ > $PIDFILE
# Parse cmdline arguments
while getopts ":ph" opt; do
case $opt in
p)
#we set the purge mode (cron mode)
purge_only=1
;;
\?|h)
echo "Help text"
exit 1
;;
esac
done
# declare usefull functions
# common logging function
function logmsg() {
LOGFILE=ftp_upload_`date +%Y%m%d`.log
echo $(date +%m-%d-%Y\ %H:%M:%S) $* >> $SCRIPTPATH/log/${LOGFILE}
}
# Upload to remote FTP
# we use ncftpput to batch silently
# $1 file to upload $2 return value placeholder
function upload() {
ncftpput -V -u $username -p $password $ftpadress /prog/ $1
return $?
}
function purge_retry() {
failed_files=$(ls -1 -rt ${retrydir}/*)
if [ -z $failed_files ]; then
return
fi
while read line
do
#upload ${retrydir}/$line
upload $line
if [ $? != 0 ]; then
# upload failed we exit
exit
fi
logmsg File $line Uploaded from ${retrydir}
mv $line $folderB
logmsg File $line Copied from ${retrydir}
done <<< "$failed_files"
}
# first check out 'queue' directory
purge_retry
# if called from a cron task we are done
if [ $purge_only ]; then
rm -f $PIDFILE
exit 0
fi
# look in incoming dir
new_files=$(ls -1 -rt ${folderA}/*)
while read line
do
# launch upload
if [ Z$line == 'Z' ]; then
break
fi
#upload ${folderA}/$line
upload $line
if [ $? == 0 ]; then
logmsg File $line Uploaded from ${folderA}
else
# upload failed we cp to retry folder
echo upload failed
cp $line $retrydir
fi
# whether the upload succeeded or failed, we ALWAYS move the file to folderB
mv $line $folderB
logmsg File $line Copied from ${folderA}
done <<< "$new_files"
# clean exit
rm -f $PIDFILE
exit 0
Below is the content of ftpmon.path:
[Unit]
Description= Triggers the service that logs changes.
Documentation= man:systemd.path
[Path]
# Enter the path to monitor (/u02/dump)
PathModified=/u02/dump/
[Install]
WantedBy=multi-user.target
Below is the content of ftpmon.service:
[Unit]
Description= Starts File Upload monitoring
Documentation= man:systemd.service
[Service]
Type=oneshot
#Set here the user that ftpxml.sh will run as
User=root
#Set the exact path to the script
ExecStart=/u02/ftpxml/ftpxml.sh
[Install]
WantedBy=default.target
Thanks in advance; I hope some experts can give me suggestions.
Since you remove the successfully transferred files from A, only files with transfer errors are left in A. So I am dealing only with files in one folder.
List your files by creation time with
find . -maxdepth 1 -type f -print0 | xargs -r0 stat -c %y\ %n | sort
if you want hidden files to be included, or, if not,
find . -maxdepth 1 -type f ! -name '.*' -print0 | xargs -r0 stat -c %y\ %n | sort
You'll get something like
2016-02-19 18:53:41.000000000 ./.dockerenv
2016-02-19 18:53:41.000000000 ./.dockerinit
2016-02-19 18:56:09.000000000 ./versions.txt
2016-02-19 19:01:44.000000000 ./test.sh
Now cut the timestamps off to get just the filenames (or use xargs -r0 stat -c %n in the first place, if it does not matter that the files are then ordered by name instead of by timestamp) and then, as in the sketch below:
do the transfer
check the success
move successfully transferred files to B
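A sketch of that loop, reusing the upload command and config variables from the question; sorting on the epoch mtime (%Y) keeps the timestamp a single field that is easy to strip:
find . -maxdepth 1 -type f -print0 | xargs -r0 stat -c '%Y %n' | sort -n | cut -d' ' -f2- |
while IFS= read -r file; do
    # Transfer one file; </dev/null keeps ncftpput from eating the loop's stdin.
    if ncftpput -V -u "$username" -p "$password" "$ftpadress" /prog/ "$file" < /dev/null; then
        mv "$file" "$folderB"    # success: archive to folder B
    fi
done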
As you stated above, there are situations where newly stored files are not successfully transferred. This can happen if the file is still being written after you start the transfer. So filter on the timestamp so files are at least some time old: add -mmin +1 to the find statement for "more than one minute old":
find . -maxdepth 1 -type f -mmin +1 -print0 | xargs -r0 stat -c %n | sort
If you don't want to rely on a minimum file age, you'll have to check whether the file is still open: lsof | grep ./testfile. But this may have issues if you have tmpfs in your file system.
lsof | grep ./testfile
lsof: WARNING: can't stat() tmpfs file system /var/lib/docker/containers/8596cd310292a54652c7f50d7315c8390703b4816442146b340946779a72a40c/shm
Output information may be incomplete.
lsof: WARNING: can't stat() proc file system /run/docker/netns/fb9323486c44
Output information may be incomplete.
So add %s to the stat format to check the file size twice within a few seconds; if it's constant, the file has probably been written completely. Probably, because the writing process may just be stalled.
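A minimal sketch of that size check (the three-second wait is arbitrary):
size1=$(stat -c %s "$file")
sleep 3
size2=$(stat -c %s "$file")
# Constant size across the interval: the file has probably been written completely.
if [ "$size1" -eq "$size2" ]; then
    echo "$file looks complete"
fi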
I am working on a horribly old machine without logrotate.
[ Actually it has busybox 0.6, which is 'void of form' for most purposes. ]
I have openvpn running and I'd like to be able to see what it's been up to. The openvpn I'm using can output progress info to stdout or to a named log file.
I tried and failed to find a way to stop it using one log file and start it on another. Maybe some SIGUSR or something will make it close and re-open the output file, but I can't find it.
So I wrote a script which reads from stdin, and directs output to a rotating log file.
So now all I need to do is pipe the output from openvpn to it.
Job done.
Except that if I kill openvpn, the script which is processing its output just runs forever. There's nothing more it can do, so I'd like it to die automatically.
Is there any way to trap the situation in the script ("EOF on STDIN" or something), or to "find the process ID which is feeding my stdin", or whatever?
I see that this resembles the question
"Tee does not exit after pipeline it's on has finished"
but it's not quite that, in that I have no control over the behaviour of openvpn (save that I can kill it). I do have control over the script that receives the output of openvpn, but I can't work out how to detect the death of openvpn, or of the pipe from it to me.
My upper-level script is roughly:
vpn_command="openvpn --writepid ${sole_vpn_pid_file} \
--config /etc/openvpn/openvpn.conf \
--remote ${VPN_HOST} ${VPN_PORT} "
# collapse sequences of multiple spaces to one space
vpn_command_tight=$(echo -e ${vpn_command}) # must not quote the parameter
# We pass the pid file over explicitly in case we ever want to use multiple VPNs.
( ./${launchAndWaitScriptFullName} "${vpn_command_tight}" "${sole_vpn_pid_file}" 2>&1 | \
./vpn-log-rotate.sh 10000 /var/log/openvpn/openvpn.log ) &
If I kill the openvpn process, the "vpn-log-rotate.sh" one stays running.
That script is:
#!/bin/sh
# #file vpn-log-rotate.sh
#
# #brief rotates stdin out to 2 levels of log files
#
# #param linesPerFile Number of lines to be placed in each log file.
# #param logFile Name of the primary log file.
#
# Archives the last log files on entry to .last files, then starts clean.
#
# #FIXME DGC 28-Nov-2014
# Note that this script does not die if the previous stage of the pipeline dies.
# It is possible that use of "trap SIGPIPE" or similar might fix that.
#
# #copyright Copyright Dexdyne Ltd 2014. All rights reserved.
#
# #author DGC
linesPerFile="$1"
logFile="$2"
# The location of this script as an absolute path. ( e.g. /home/Scripts )
scriptHomePathAndDirName="$(dirname "$(readlink -f $0)")"
# The name of this script
scriptName="$( basename $0 )"
. ${scriptHomePathAndDirName}/vpn-common.inc.sh
# Includes /sbin/script_start.inc.sh
# Reads config file
# Sets up vpn_temp_directory
# Sets up functions to obtain process id, and check if process is running.
# includes vpn-script-macros
# Remember our PID, to make it easier for a supervisor script to locate and kill us.
echo $$ > ${vpn_log_rotate_pid_file}
onExit()
{
echo "vpn-log-rotate.sh is exiting now"
rm -f ${vpn_log_rotate_pid_file}
}
trap "( onExit )" EXIT
logFileRotate1="${logFile}.1"
# Currently remember the 2 previous logs, in a rather knife-and-fork manner.
logFileMinus1="${logFile}.minus1"
logFileMinus2="${logFile}.minus2"
logFileRotate1Minus1="${logFileRotate1}.minus1"
logFileRotate1Minus2="${logFileRotate1}.minus2"
# If the primary log file exists, rename it to be the rotated version.
rotateLogs()
{
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileRotate1}"
fi
}
# If the log files exist, rename them to be the archived copies.
archiveLogs()
{
if [ -f "${logFileMinus1}" ]
then
mv -f "${logFileMinus1}" "${logFileMinus2}"
fi
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileMinus1}"
fi
if [ -f "${logFileRotate1Minus1}" ]
then
mv -f "${logFileRotate1Minus1}" "${logFileRotate1Minus2}"
fi
if [ -f "${logFileRotate1}" ]
then
mv -f "${logFileRotate1}" "${logFileRotate1Minus1}"
fi
}
archiveLogs
rm -f "${LogFile}"
rm -f "${logFileRotate1}"
while true
do
lines=0
while [ ${lines} -lt ${linesPerFile} ]
do
read line
lines=$(( ${lines} + 1 ))
#echo $lines
echo ${line} >> ${logFile}
done
mv -f "${logFile}" "${logFileRotate1}"
done
exit_0
Change this:
read line
to this:
read line || exit
so that if read-ing fails (because you've reached EOF), you exit.
Better yet, change it to this:
IFS= read -r line || exit
so that you don't discard leading whitespace, and don't treat backslashes as special.
And while you're at it, be sure to change this:
echo ${line} >> ${logFile}
to this:
printf '%s\n' "$line" >> "$logFile"
so that you don't run into problems if $line has a leading -, or contains * or ?, or whatnot. (The \n in the format keeps the trailing newline that echo was supplying.)
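Putting the pieces together, the inner loop of vpn-log-rotate.sh becomes (a sketch):
while [ "${lines}" -lt "${linesPerFile}" ]
do
    # Exit when read fails (EOF on stdin, i.e. openvpn has died).
    IFS= read -r line || exit
    lines=$(( lines + 1 ))
    printf '%s\n' "$line" >> "$logFile"
done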