send bash stderr to logfile, but only if an error exists - bash

I am using the following code to send stderr to a file.
.script >2 "errorlog.$(date)"
The problem is that a blank log file is created every time I run the script, even if an error doesn't exist. I have looked online and in a few books as well, and can't figure out how to create a log file only if errors exist.

Output redirection opens the file before the script is run, so there is no way to tell if the file will receive any output. What you can do, however, is immediately delete the file if it winds up being empty:
logfile="errorlog.$(date)"
# Note your typo; it's 2>, not >2
script 2> "$logfile"; [ -s "$logfile" ] || rm -f "$logfile"
I use rm -f just in case: [ -s "$logfile" ] is false not only when $logfile is empty but also when it doesn't exist at all, so rm could be handed a file that isn't there. I use ; rather than && to separate the commands, because whether $logfile contains anything does not depend on whether script succeeds.
You can wrap this up in a function to make it easier to use.
save_log () {
    logfile=${1:-errorlog.$(date)}
    cat - > "$logfile"
    [ -s "$logfile" ] || rm -f "$logfile"
}
script 2> >( save_log )
script 2> >( save_log my_logfile.txt )
Not quite as simple as redirecting to a file, and depends on a non-standard feature (process substitution), but not too bad, either.

Related

redirect screen output to file

I'm trying to redirect the screen output to a log file but I don't seem to be getting this right, see the code below:
DT=$(date +%Y-%m-%d-%H-%m-%s)
echo $DT > log_copy_$DT.txt
cat dirfiles.txt | while read f ; do
    dest=/mydir
    scp "${f}" $dest >> log_copy_$DT.txt 2>&1
done
All I get is a file with the date, but not the screen results (I need to see if the files copied correctly).
So, basically I'm appending the results of the scp command to the log and using 2>&1 so that whatever appears on screen is also written to the file, but it doesn't seem to work.
I need to run this on a crontab so I'm not sure if the screen contents will even go to the log once I set it up.
Well, after investigating, it seems scp doesn't write its usual screen output when stdout is not a terminal: the % progress display is simply suppressed rather than sent to the file, so I ended up doing this:
scp "${f}" $dest && echo "$f successfully copied!" >> log_copy_$DT.txt
Basically, if it can copy the file over, it writes a message to the log saying it was OK.
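If you also want failures recorded, a variant along these lines should work (a sketch; it reuses the same log file and also captures scp's own error messages):
if scp "${f}" $dest >> log_copy_$DT.txt 2>&1
then
    echo "$f successfully copied!" >> log_copy_$DT.txt
else
    echo "$f FAILED to copy" >> log_copy_$DT.txt
fi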

Replace file only if not being accessed in bash

My requirement is to replace file only when it is not being accessed. I have the following snippet:
if [ -f file ]
then
    while true
    do
        if [ -n "$(fuser "file")" ]
        then
            echo "file is in use..."
        else
            echo "file is free..."
            break
        fi
    done
fi
{
    flock -x 3
    mv newfile file
} 3>file
But I suspect that I am not handling concurrency properly. Please give some insight and a possible way to achieve this.
Thanks.
My requirement is to replace file only when it is not being accessed.
Getting requirements right can be hard. In case your actual requirement is the following, you can boil down the whole script to just one command.
My guess on the actual requirement (not as strict as the original):
Replace file without disturbing any programs reading/writing file.
If this is the case, you can use a very neat behavior: on Unix-like systems, file descriptors always point to the file (not the path) for which they were opened. You can move or even delete the corresponding path. See also How do the UNIX commands mv and rm work with open files?.
Example:
Open a terminal and enter
i=1; while true; do echo $((i++)); sleep 1; done > file &
tail -f file
The first command writes output to file and runs in the background. The second command reads the file and continues to print its changing content.
Open another terminal and move or delete file, for instance with
mv file file2
echo overwritten > otherFile
mv otherFile file2
rm file2
echo overwritten > file
echo overwritten > file2
While executing these commands have a look at the output of tail -f in the first terminal – it won't be affected by any of these commands. You will never see overwritten.
Solution For New Requirement:
Because of this behavior you can replace the whole script with just one mv command:
mv newfile file
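If your script also produces newfile, one detail worth keeping in mind: create it in the same directory (and therefore the same filesystem) as file, so the final mv is an atomic rename rather than a copy. A minimal sketch, where the temp-file name and the generate_new_content command are just placeholders:
tmp=$(mktemp file.XXXXXX)        # temp file in the current directory, same filesystem as file
generate_new_content > "$tmp"    # hypothetical command producing the new contents
mv "$tmp" file                   # atomic rename; existing readers keep their old descriptor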
Consider lsof.
mvWhenClear() {
    # Wait while "$1" still exists and some process has it open, then move it onto "$2".
    local delay=${delay:-1}                       # seconds between checks; override by setting delay
    while [[ -f "$1" ]] && lsof "$1" > /dev/null
    do
        sleep "$delay"
    done
    mv "$1" "$2"    # still allows a race condition
}

shell script to remove a file if it already exists

I am working on some stuff where I am storing data in a file.
But each time I run the script, the new data gets appended to the existing file.
I want help on how I can remove the file if it already exists.
Don't bother checking if the file exists, just try to remove it.
rm -f /p/a/t/h
# or
rm /p/a/t/h 2> /dev/null
Note that the second command will fail (return a non-zero exit status) if the file did not exist, but the first will succeed owing to the -f (short for --force) option. Depending on the situation, this may be an important detail.
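A quick way to see the difference (the path is just an example of a file that does not exist):
rm -f /tmp/does-not-exist;            echo $?   # 0: -f suppresses both the message and the failure status
rm /tmp/does-not-exist 2> /dev/null;  echo $?   # non-zero: the file was missing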
But more likely, if you are appending to the file it is because your script is using >> to redirect something into the file. Just replace >> with >. It's hard to say since you've provided no code.
Note that you can do something like test -f /p/a/t/h && rm /p/a/t/h, but doing so is completely pointless. It is quite possible that the test will return true but /p/a/t/h will cease to exist before you try to remove it, or worse, the test will fail and /p/a/t/h will be created before you execute the next command, which expects it not to exist. Attempting this is a classic race condition. Don't do it.
Another one line command I used is:
[ -e file ] && rm file
You can use this:
#!/bin/bash
file="file_you_want_to_delete"
if [ -f "$file" ] ; then
    rm "$file"
fi
If you want to skip the step of checking whether the file exists, you can use a fairly simple command, which deletes the file if it exists and does not throw an error if it doesn't:
rm -f xyz.csv
A one-liner to remove a file if it already exists (based on Jindra Helcl's answer):
[ -f file ] && rm file
or with a variable:
#!/bin/bash
file="/path/to/file.ext"
[ -f "$file" ] && rm "$file"
Something like this would work
#!/bin/sh
if [ -f FILE ]
then
    rm FILE
fi
-f checks if it's a regular file
-e checks if the file exists
See Introduction to if for more information.
EDIT: combining -e with -f is redundant (and [ -fe FILE ] is not valid test syntax), so using -f alone works.
if [ $( ls <file> ) ]; then rm <file>; fi
Also, if you redirect your output with > instead of >> it will overwrite the previous file
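For example (illustrative file name):
echo "first run"  > out.txt    # > truncates out.txt before writing
echo "second run" > out.txt    # out.txt now contains only "second run"
echo "third run"  >> out.txt   # >> appends, so out.txt now has two lines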
So in my case I wanted to remove a FIFO file before I create it again, so this worked for me:
#!/bin/bash
file="/tmp/test"
rm -f "$file" || true
mkfifo "$file"
The || true keeps the script running even if rm fails (and with -f, rm already exits successfully when the file is not found).

how can a bash script which is in a pipe detect that its data source has died?

I am working on a horribly old machine without logrotate.
[ Actually it has busybox 0.6, which is 'void of form' for most purposes. ]
I have openvpn running and I'd like to be able to see what it's been up to. The openvpn I'm using can output progress info to stdout or to a named log file.
I tried and failed to find a way to stop it using one log file and start it on another. Maybe some SIGUSR or something will make it close and re-open the output file, but I can't find it.
So I wrote a script which reads from stdin, and directs output to a rotating log file.
So now all I need to do is pipe the output from openvpn to it.
Job done.
Except that if I kill openvpn, the script which is processing its output just runs forever. There's nothing more it can do, so I'd like it to die automatically.
Is there any way for the script to trap the situation, something like "EOF on STDIN", or "find the process ID which is feeding my stdin", or whatever?
I see that this resembles the question
"Tee does not exit after pipeline it's on has finished"
but it's not quite the same, in that I have no control over the behaviour of openvpn (save that I can kill it). I do have control over the script that receives openvpn's output, but I can't work out how to detect the death of openvpn, or of the pipe from it to me.
My upper-level script is roughly:
vpn_command="openvpn --writepid ${sole_vpn_pid_file} \
--config /etc/openvpn/openvpn.conf \
--remote ${VPN_HOST} ${VPN_PORT} "
# collapse sequences of multiple spaces to one space
vpn_command_tight=$(echo -e ${vpn_command}) # must not quote the parameter
# We pass the pid file over explicitly in case we ever want to use multiple VPNs.
( ./${launchAndWaitScriptFullName} "${vpn_command_tight}" "${sole_vpn_pid_file}" 2>&1 | \
./vpn-log-rotate.sh 10000 /var/log/openvpn/openvpn.log ) &
If I kill the openvpn process, the vpn-log-rotate.sh one stays running.
That is:
#!/bin/sh
# #file vpn-log-rotate.sh
#
# #brief rotates stdin out to 2 levels of log files
#
# #param linesPerFile Number of lines to be placed in each log file.
# #param logFile Name of the primary log file.
#
# Archives the last log files on entry to .last files, then starts clean.
#
# #FIXME DGC 28-Nov-2014
# Note that this script does not die if the previous stage of the pipeline dies.
# It is possible that use of "trap SIGPIPE" or similar might fix that.
#
# #copyright Copyright Dexdyne Ltd 2014. All rights reserved.
#
# #author DGC
linesPerFile="$1"
logFile="$2"
# The location of this script as an absolute path. ( e.g. /home/Scripts )
scriptHomePathAndDirName="$(dirname "$(readlink -f $0)")"
# The name of this script
scriptName="$( basename $0 )"
. ${scriptHomePathAndDirName}/vpn-common.inc.sh
# Includes /sbin/script_start.inc.sh
# Reads config file
# Sets up vpn_temp_directory
# Sets up functions to obtain process id, and check if process is running.
# includes vpn-script-macros
# Remember our PID, to make it easier for a supervisor script to locate and kill us.
echo $$ > ${vpn_log_rotate_pid_file}
onExit()
{
    echo "vpn-log-rotate.sh is exiting now"
    rm -f ${vpn_log_rotate_pid_file}
}
trap "( onExit )" EXIT
logFileRotate1="${logFile}.1"
# Currently remember the 2 previous logs, in a rather knife-and-fork manner.
logFileMinus1="${logFile}.minus1"
logFileMinus2="${logFile}.minus2"
logFileRotate1Minus1="${logFileRotate1}.minus1"
logFileRotate1Minus2="${logFileRotate1}.minus2"
# If the primary log file exists, rename it to be the rotated version.
rotateLogs()
{
    if [ -f "${logFile}" ]
    then
        mv -f "${logFile}" "${logFileRotate1}"
    fi
}
# If the log files exist, rename them to be the archived copies.
archiveLogs()
{
    if [ -f "${logFileMinus1}" ]
    then
        mv -f "${logFileMinus1}" "${logFileMinus2}"
    fi
    if [ -f "${logFile}" ]
    then
        mv -f "${logFile}" "${logFileMinus1}"
    fi
    if [ -f "${logFileRotate1Minus1}" ]
    then
        mv -f "${logFileRotate1Minus1}" "${logFileRotate1Minus2}"
    fi
    if [ -f "${logFileRotate1}" ]
    then
        mv -f "${logFileRotate1}" "${logFileRotate1Minus1}"
    fi
}
archiveLogs
rm -f "${LogFile}"
rm -f "${logFileRotate1}"
while true
do
    lines=0
    while [ ${lines} -lt ${linesPerFile} ]
    do
        read line
        lines=$(( ${lines} + 1 ))
        #echo $lines
        echo ${line} >> ${logFile}
    done
    mv -f "${logFile}" "${logFileRotate1}"
done
exit_0
Change this:
read line
to this:
read line || exit
so that if read-ing fails (because you've reached EOF), you exit.
Better yet, change it to this:
IFS= read -r line || exit
so that you don't discard leading whitespace, and don't treat backslashes as special.
And while you're at it, be sure to change this:
echo ${line} >> ${logFile}
to this:
printf '%s\n' "$line" >> "$logFile"
so that you don't run into problems if $line has a leading -, or contains * or ?, or whatnot.
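Putting those changes together, the inner loop of vpn-log-rotate.sh would look roughly like this (a sketch using the script's own variable names):
while [ ${lines} -lt ${linesPerFile} ]
do
    IFS= read -r line || exit            # exit when openvpn closes the pipe (EOF on stdin)
    lines=$(( lines + 1 ))
    printf '%s\n' "$line" >> "${logFile}"
done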

Create a detailed self tracing log in bash

I know you can create a log of the output by typing script nameOfLog.txt in the terminal before running the script and exit afterwards, but I want to do it in the actual script so it creates a log automatically. There is a problem I'm having with the exec >>log_file 2>&1 line:
The code redirects the output to a log file, and the user can no longer interact with the script. How can I create a log that just copies the output while still letting me see and interact with the script?
And, is it possible to have it also automatically record which files were copied? For example, if a file at /home/user/Desktop/file.sh was copied to /home/bckup, is it possible to have that printed in the log too, or will I have to write that manually?
Is it also possible to record the amount of time it took to run the whole process and count the number of files and directories that were processed or am I going to have to write that manually too?
My future self appreciates all the help!
Here is my whole code:
#!/bin/bash
collect()
{
    find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup # xargs -0 handles file names with spaces. Also gives error of "cp: will not overwrite just-created" even if the file didn't exist previously
}
echo "Starting log"
exec >>log_file 2>&1
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ] #if directory doesn't have a length of 0
then
echo -e "\nYou want to copy $directory." #-e is so the \n will work and it won't show up as part of the string
else
directory=/home/
echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
echo "Directory does not exist, creating now"
mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
To answer the "how to just copy the output" question: use a program called tee and then a bit of exec magic explained here:
redirect COPY of stdout to log file from within bash script itself
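A minimal sketch of that technique, assuming the log is still called log_file as in your script (process substitution requires bash):
#!/bin/bash
# Copy everything written to stdout and stderr into log_file while still showing it on screen.
exec > >(tee -a log_file) 2>&1
Because only stdout and stderr are redirected, the read -t prompt still appears on screen and keyboard input works as before.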
Regarding the analytics (time needed, files accessed, etc) -- this is a bit harder. Some programs that can help you are time(1):
time - run programs and summarize system resource usage
and strace(1):
strace - trace system calls and signals
Check the man pages for more info. If you have control over the script it will be probably easier to do the logging yourself instead of parsing strace output.
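For instance, a rough way to log the elapsed time and the number of files copied from within the script itself (a sketch; it assumes the ~/bckup destination used above):
start=$(date +%s)
collect
copied=$(find ~/bckup -name "*.sh" | wc -l)   # crude count: everything matching in the backup directory
end=$(date +%s)
echo "Copied $copied file(s) in $(( end - start )) seconds"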
