bash: redirected file seen by script as 'does not exist'

I want to check whether there were any errors from the last command, so I redirect stderr to a file and check the file for the string "error" (there is only one possible error in this case).
My script looks like this:
#acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
if grep -i "error" /some/path/err.out ; then
echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
#release lock
exit 1
fi
The 'if' condition gives the error 'No such file or directory' for err.out, even though I can see that the file exists.
Did I miss anything? Any help is appreciated. Thanks!
PS: I couldn't check the exit code using $? because the command runs in the background.

In addition to the file possibly not existing yet when you call grep, you only call grep once, and it only sees whatever data is in the file at that moment. grep will not keep reading once it reaches the end of the file, waiting for MyProgramme to complete. Instead, I would recommend using a named pipe as the input to grep. This causes grep to keep reading from the pipe until MyProgramme does, in fact, complete.
#acquire lock
p=/some/path/err.out
rm -f "$p"
mkfifo "$p"
MyProgramme 2> "$p" &
if grep -i "error" "$p" ; then
echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
#release lock
exit 1
fi

When you start MyProgramme in the background, it is possible that grep runs before MyProgramme has written to (and thereby created) the file /some/path/err.out. That is why the file exists later when you check it yourself, even though grep couldn't find it.
You can wait until the background job completes using wait before inspecting the file with grep.
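A minimal sketch of that wait-based approach, reusing the script from the question ($! expands to the PID of the most recent background job):
#acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
wait $!   # block until the background job has finished and closed err.out
if grep -i "error" /some/path/err.out ; then
echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
#release lock
exit 1
fi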

Related

On what occasion does 'tee' delete the file it was writing to?

bash 4.2, CentOS
The script:
#!/bin/bash
LOG_FILE=$homedir/logs/result.log
exec 3>&1
exec > >(tee -a ${LOG_FILE}) 2>&1
echo
end_shell_number=10
for script in `seq -f "%02g_*.sh" 0 $end_shell_number`; do
if ! bash $homedir/$script; then
printf 'Script "%s" failed, terminating...\n' "$script" >&2
exit 1
fi
done
It basically runs through sub-scripts numbered 00 to 10 and logs everything to LOG_FILE while also displaying it on stdout.
I was watching the log grow with tail -F ./logs/result.log, and it was working nicely until the log file suddenly got removed.
The sub-scripts do nothing related to file descriptors or the log file; they remotely restart Tomcats via ssh commands.
Question :
tee was writing to the log file successfully until the file got erased, and logging stopped from then on.
Is there a file-size limit or timeout in tee? Is there any known behavior where tee deletes the file it is writing to?
On what occasion does 'tee' delete the file it was writing to?
tee does not delete or truncate the file once it has started writing.
Is there a file-size limit or timeout in tee?
No.
Is there any known behavior where tee deletes the file it is writing to?
No.
Note that the file can be removed by something else while tee still writes to its open file descriptor; the writes succeed, but the file is no longer accessible by name (see man 3 unlink).
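A small demo of that unlink behaviour (the paths and commands here are illustrative, not taken from the question):
( for i in 1 2 3 4 5; do date; sleep 1; done ) | tee /tmp/demo.log &
sleep 2
rm /tmp/demo.log   # tee keeps writing to the now-unlinked inode without error
ls /tmp/demo.log   # "No such file or directory" -- the name is gone
wait               # the pipeline finishes normally; its later output is lost with the inode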

Unable to exit line in bash script

I am writing a script to start an application, grep its log for the words "Server startup", and then exit and execute the next command. But it does not exit and run the next command once the condition is met. Any help?
#!/bin/bash
application start; tail -f /application/log/file/name | \
while read line ; do
echo "$line" | grep "Server startup"
if [ $? = 0 ]
then
echo "application started...!"
fi
done
Don't Use Tail's Follow Flag
Tail's follow flag (-f) never exits on its own; it keeps following the file until it receives an appropriate signal or encounters an error condition. You will need a different approach to tracking data at the end of your file, such as watch, logwatch, or periodic log rotation using logrotate. The best tool will depend a lot on the format and frequency of your log data.
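For example, a minimal periodic-check sketch (my assumption of one such approach, not from the answer; the log path is taken from the question):
until grep -q "Server startup" /application/log/file/name; do
sleep 1   # re-scan the log once per second until the message appears
done
echo "application started...!"
# next command goes here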

Creating lock files shell

I'm currently creating a lock folder when my script runs, and I also move files into sub-folders there for processing. When the script ends, a TRAP is called which removes the lock folder and its contents, all of which works fine.
We had an issue the other day when someone pulled the power from one of the servers, so my TRAP was never called; after the reboot the lock folder was still there, and my scripts couldn't restart until it was removed manually.
What's the best way of checking whether the script is already running? I currently have this approach using process IDs:
if ! mkdir $LOCK_DIR 2>/dev/null; then # Try to create the lock dir. This should pass successfully first run.
# If the lock dir exists
pid=$(cat $LOCK_DIR/pid.txt)
if [[ $(ps -ef | awk '{print $2}' | grep $pid | grep -v grep | wc -l) == 1 ]]; then
echo "Script is already running"
exit 1
else
echo "It looks like the previous script was killed. Restarting process."
# Do some cleanup here before removing dir and re-starting process.
fi
fi
# Create a file in the lock dir containing the pid. Echo the current process id into the file.
touch $LOCK_DIR/pid.txt
echo $$ > $LOCK_DIR/pid.txt
# Rest of script below
Checking /proc and cmdline is a good call, especially as at the moment you are only checking that some process with that process id exists, not that the process is actually your script.
You could still do this with your ps command, which would offer some degree of platform agnosticism.
COMMAND=$(ps -o comm= -p "$pid")
if [[ $COMMAND == my_process ]]
then
.....
fi
Note that the command-line options to ps limit its output to the command name only, with no header.
Many systems nowadays use tmpfs for directories like /tmp. These directories will therefore always be cleared after a reboot.
If using your pid file, note you can easily see the command
running under that pid in /proc/$pid/cmdline and /proc/$pid/exe.
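A hedged sketch of that /proc check, reusing the pid file from the question (Linux-only; /proc/$pid/cmdline is NUL-separated, so grep treats it as binary, which is fine for a yes/no match):
pid=$(cat "$LOCK_DIR/pid.txt")
if [[ -d /proc/$pid ]] && grep -q "$(basename "$0")" /proc/$pid/cmdline; then
echo "Script is already running"
exit 1
fi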

Shell script: Ensure that script isn't executed if already running [duplicate]

Possible Duplicate:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
I've set up a cronjob to back up my folders properly, which I am quite proud of. However, looking at the results of the backups, I've found that my backup script has been called more than once by cron, resulting in multiple backups running at the same time.
Is there any way I can ensure that a shell script does not run if the very same script is already executing?
A solution without race-condition or early-exit problems is to use a lock file. The flock utility handles this very well and can be used like this:
flock -n /var/run/your.lockfile -c /your/script
It will return immediately with a non-zero status if the script is already running.
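If you'd rather take the lock inside the script itself, a common flock idiom looks like this (a sketch; fd 9 is an arbitrary unused descriptor, and the lock is released automatically when the script exits and the descriptor closes):
exec 9>/var/run/your.lockfile
flock -n 9 || { echo "Script already running" ; exit 1 ; }
# ... rest of the script runs while fd 9 holds the lock ...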
The usual and simple way to do this is to put something like:
if [[ -f /tmp/myscript.running ]] ; then
exit
fi
touch /tmp/myscript.running
at the top of your script, and
rm -f /tmp/myscript.running
at the end, and in trap functions in case it doesn't reach the end.
This still has a few potential problems (such as a race condition at the top) but will do for the vast majority of cases.
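A sketch of the above with the trap mentioned, so the marker file is removed even if the script exits early:
if [[ -f /tmp/myscript.running ]] ; then
exit
fi
touch /tmp/myscript.running
trap 'rm -f /tmp/myscript.running' EXIT
# ... script body ...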
A good way without a lock file:
me=$(basename "$0")
ps | grep "$me" | grep -v grep > "/var/tmp/$me.pid"
pids=$(awk '{print $1}' "/var/tmp/$me.pid")
for pid in $pids
do
if [ "$pid" -ne $$ ]; then
logprint " $0 is already running. Exiting"
exit 7
fi
done
rm -f "/var/tmp/$me.pid"
This does it without a lock file, which is cool.
ps into a temp file, scrape the first field (the pid) and look for ourselves. If we find a different one, then somebody's already running. The grep on the script's name shortens the list to just the instances of this program, and the grep -v grep gets rid of the line that is the grep itself. :)
You can use a tmp file.
Name it tmpCronBkp.a, tmpCronBkp.b, tmpCronBkp.c, etc., related to your backup script.
Create it on script start and delete it at the end.
Meanwhile, check whether the file exists, and which file exists.
Have you tried this way?

Shell fragment to make sure only one instance a shell script runs at any given time [duplicate]

Possible Duplicate:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
At a previous workplace we used to have a highly refined bash function called run-only-once that we could write into any long-running shell script and call at the start of the script; it would check whether the script was already running as another process and, if so, exit with a notification to STDOUT.
Does anyone have a function/fragment like this they could share?
Our old function (which I no longer have) would check for a PID file (in scriptname.$$ format) in /var/run, and would then either exit or simply continue. Where a PID file existed, it would do some checks to make sure the process was still active. It also had a few options for controlling whether a notification was output at all.
From memory, our function only worked in bash. Bonus points for a /bin/sh version.
Put this at the start of the script
SCRIPTNAME=`basename $0`
PIDFILE=/var/run/${SCRIPTNAME}.pid
if [ -f ${PIDFILE} ]; then
#verify if the process is actually still running under this pid
OLDPID=`cat ${PIDFILE}`
RESULT=`ps -ef | grep ${OLDPID} | grep ${SCRIPTNAME}`
if [ -n "${RESULT}" ]; then
echo "Script already running! Exiting"
exit 255
fi
fi
#grab the pid of this process ($$ is the shell's own pid) and update the pid file with it
PID=$$
echo ${PID} > ${PIDFILE}
and at the end
if [ -f ${PIDFILE} ]; then
rm ${PIDFILE}
fi
This first checks for the existence of the pid file. If one is present, it confirms that a process with the old pid is running under this script's name, and exits if so. Otherwise it carries on and updates the pid file with the new pid. The bit at the end checks for the existence of the pid file and deletes it, so the script can run next time.
Check that the permissions on /var/run are OK for your script, though; otherwise create the PID file in another directory. The same directory the script runs in would be fine.
There are two ways to do atomic locks from the shell. The simplest and most portable is mkdir. Creating a directory will always fail if a file or directory by that name already exists, so simply use a "lock directory" instead of a "lock file".
mkdir "$LOCK" || { echo "Script already running" ; exit 1 ; }
The other method is the noclobber option, set using set -C. This option forces the shell to open files for output redirection with the O_EXCL flag. Use it like this:
set -C
> "$LOCK" || { echo "Script already running" ; exit 1 ; }
set +C
Rather than redirecting a null command, you might prefer to echo the current PID or something else useful to be stored in the file.
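For instance, a hedged variant that stores the PID (not from the answer verbatim):
set -C
echo $$ > "$LOCK" || { echo "Script already running" ; exit 1 ; }   # redirection fails if $LOCK exists
set +C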
It's worth keeping in mind that some ancient shells have broken implementations of the noclobber option -C which do not use O_EXCL but instead perform buggy, non-atomic checks for file existence. Probably not an issue in the 21st century, but you should be aware, just in case.
It is more reliable to use the lockfile utility (shipped with procmail) on Linux.
From the man page:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
You can specify retries and wait times. It was created specifically for this purpose, so it takes care of the race conditions.
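For example, a sketch of those retry flags (see man lockfile): retry every 5 seconds, giving up after 12 attempts:
lockfile -5 -r 12 important.lock || exit 1
# ... critical section ...
rm -f important.lock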
