How do I prevent a continuous loop from ending - bash

I have a simple bash script which calls a PHP script every 10 minutes that performs some maintenance. Every once in a while this PHP script terminates while it's running, and when this happens the bash script exits.
I'd like to make it so the bash script keeps on looping even if the PHP script falters. Can anyone point me in the right direction? I've been searching for a while but I can't seem to find the answer; maybe I'm not using the right search terms.
#!/bin/sh
set -e
while :
do
/usr/bin/php /path/to/maintenance/script.php
sleep 600
done

Rjz's comment is correct: you should use cron. To do that, run crontab -e and add this line:
*/10 * * * * /usr/bin/php /path/to/maintenance/script.php
If it's set up properly, cron will email you any output (including error messages).

The set -e line sets the shell's "exit on error" flag, which tells it that if a program it runs exits with a non-zero status, the shell should also exit:
set -e
false
echo if this prints, your shell is not honoring "set -e"
There are exceptions for programs whose status is being tested, of course, so that:
set -e
if prog; then
echo program succeeded
else
echo program failed
fi
echo this will still print
will work correctly (one or the other echo will occur, and then the last one will as well).
Back in the Dim Time, when /bin/sh was non-POSIX and was written in Bournegol, there was a bug in some versions of sh that broke || expressions:
set -e
false || true
echo if this prints, your shell is OK
(The logic bug applied to && expressions internally as well, but was harmless there, since false && anything is itself false which means the whole expression fails anyway!) Ever since then, I've been wary of "-e".
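If you do want to keep the looping script rather than switch to cron, a minimal sketch (reusing the path and interval from the question) is to drop set -e, or to neutralize the command's exit status, so one failed run cannot end the loop:
#!/bin/sh
# No "set -e" here: a failing run should not terminate the loop.
while :
do
    # "|| true" forces a zero status even if the PHP script fails,
    # so the loop survives even if "set -e" were in effect.
    /usr/bin/php /path/to/maintenance/script.php || true
    sleep 600
done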

Related

Difference of behavior between “set -e + source” and “bash -ec + source”

Context
While setting up a basic unit testing system, I ran into an odd issue.
My goal was to make sure all individual test scripts:
were run with set -e to detect errors, without needing to explicitly set this in each file;
knew right away about the functions to be tested (stored in another file) without needing to explicitly source those in each test file.
Observations
Let this be a dummy test file called to-be-sourced.sh. We want to be able to know if a command in it fails:
# Failing command!
false
# Last command is OK:
true
And here is a dummy test runner, which must run the test file:
#! /usr/bin/env bash
if (
set -e
. to-be-sourced.sh
)
then
echo 'Via set: =0'
else
echo 'Via set: ≠0'
fi
This yields Via set: =0, meaning that the runner is happy. But it should not!
My hypothesis was:
set -e is not propagated within . sourcing, and, as explained in the help for . and source, the exit status is that of the last command.
But then I came up with a workaround that works, but also relies on .:
if bash -ec '. "$0"' to-be-sourced.sh
then
echo 'Via bash: =0'
else
echo 'Via bash: ≠0'
fi
This yields ≠0 whenever a command in the test file fails, regardless of whether that command was the last one of the test file. As a bonus, I can toss any number of . a/library/file.sh within the -c command, so each test file can use all of my functions out of the box. I should therefore be happy, but:
Why does this work, considering that the -c command also relies on . to load the test file (and I thought bash’s -e was equivalent to set’s -e)?
I also thought about using bash’s --init-file, but it appeared to be skipped when a script is passed as a parameter. And anyway my question is not so much about what I was trying to achieve, but rather about the observed difference of behavior.
Edit
Sounds like if is tampering with the way set -e is handled.
This halts execution, indicating failure:
. to-be-sourced.sh
… while this goes into the then (not the else), indicating success:
if . to-be-sourced.sh
then
echo =0
else
echo ≠0
fi
(This may not be precisely correct, but I think it captures what happens.)
In your first example, set -e sets the option in a command that is lexically in the scope of an if statement, and so even though it is set, it is ignored. (You can confirm it is set by running echo $- inside to-be-sourced.sh. Note, too, that . itself has a 0 exit status, which you can confirm by replacing true with an echo statement; it's not that it fails and the failure is ignored.)
In your second example, -e sets the errexit option in a new process, which knows nothing about the if statement and therefore it is not ignored.
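A minimal way to see the first case (assuming bash) is to set -e inside a subshell that is itself the condition of an if; the failing command does not abort the subshell, because everything in it runs in a context where -e is ignored:
if ( set -e; false; echo "still running after false, status was $?" ); then
    echo "the condition reported success"
fi
# Both echo lines are printed.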

how to stop the cronjob after the previous success

I am writing a crontab script which will run every 15 minutes on Saturdays. The idea is to check whether an external API reports status = SUCCESS or not. If it succeeds, the cron job should not trigger again for the rest of the day.
Right now I am trying recursion, but I don't think that is the best solution.
Is there any other solution to achieve this? I am using a shell script to invoke the API.
Here is the existing snippet:
Cronjob:
*/15 * * * 6 validate.sh
script:
status='curl -X GET "api"'
if [[ $status == "SUCCEEDED" ]];then
trigger email
else sleep 180
./validate.sh
fi
Add another cron job so it removes the flag file on Friday evening, before the other job starts running:
59 23 * * 5 rm .succeeded.txt
Then change your script so it aborts if this file exists, and creates it when it succeeds.
#!/bin/bash
test -e .succeeded.txt && exit
if [[ $(curl -X GET "api") == "SUCCEEDED" ]];then
trigger email
touch .succeeded.txt
fi
I tried to fix other errors in your script, too, but I had to guess many things. This assumes "SUCCEEDED" is the sole output from curl when the GET works.
Putting the command in a variable is a useless complication which makes your script longer and (very slightly) slower, but in addition, it creates problems of its own when the command contains embedded quotes; see e.g. http://mywiki.wooledge.org/BashFAQ/050
... But of course, presumably you wanted to actually run the command. Your attempt would merely check whether the string in the variable was equal to "SUCCEEDED", which of course it would never be.
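For illustration, here is the difference between the two forms (with "api" as a placeholder URL, as in the question):
status='curl -X GET "api"'    # $status now holds the literal text: curl -X GET "api"
status=$(curl -X GET "api")   # $status now holds whatever the curl command printed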
Another problem was that you were spawning multiple validate.sh jobs, each of which would recurse and retry. You want one or the other, not both. I went with keeping your schedule and just trying once in each job.

Bash control flow using || on function, with set -e

If I put set -e in a Bash script, the script will exit on future errors. I'm confused about how this works with functions. Consider the following, which will only print one to standard out:
set -e # Exit on error
fun(){
echo one
non_existing_command
echo two
}
fun
Clearly, the non_existing_command is an error and so the script exits before the second echo. Usually one can use the or operator || to run another command if and only if the first command fails. That is, I would expect the following to print out both one and three, but not two:
set -e # Exit on error
fun(){
echo one
non_existing_command
echo two
}
fun || echo three
What I get however is one and two. That is, the || operator prevents the exit (as it should) but it chooses to continue with the function body and disregard the right-hand command.
Any explanation?
It appears to be documented under the set builtin command:
If a compound command or shell function executes in a context where -e is being ignored [such as on the left-hand of a ||], none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status.
Emphasis and comment are mine.
Also, if you're tempted to set -e within the function, don't bother; see the next sentence:
If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
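So even this hypothetical variant prints one and two, never three: the inner set -e is ignored because the function call sits on the left-hand side of ||, and fun returns the status of its last command, echo two, which is zero:
set -e
fun(){
    set -e               # has no effect here: the call below runs in a || context
    echo one
    non_existing_command # fails, but execution continues
    echo two
}
fun || echo three        # prints "one" and "two"; fun returns 0, so "three" is skipped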

Check processes run by cronjob to avoid multiple execution

How do I prevent a cron job from executing the same command multiple times? I have tried to look around and to check for and kill the running processes, but it doesn't work with the code below. With the code below it keeps entering the else branch when it is supposed to report "running". Any idea which part I got wrong?
#!/bin/sh
devPath=`ps aux | grep "[i]mport_shell_script"` | xargs
if [ ! -z "$devPath" -a "$devPath" != " " ]; then
echo "running"
exit
else
while true
do
sudo /usr/bin/php /var/www/html/xxx/import_from_datafile.php /dev/null 2>&1
sleep 5
done
fi
exit
cronjob:
*/2 * * * * root /bin/sh /var/www/html/xxx/import_shell_script.sh /dev/null 2>&1
I don't see the point of adding a cron job which then starts a loop that runs a job. Either use cron to run the job every minute or use a daemon script to make sure your service is started and is kept running.
To check whether your script is already running, you can use a lock directory (unless your daemon framework already does that for you):
LOCK=/tmp/script.lock # You may want a better name here
mkdir $LOCK || exit 1 # Exit with error if script is already running
trap "rmdir $LOCK" EXIT # Remove the lock when the script terminates
...normal code...
If your OS supports it, then /var/lock/script might be a better path.
Your next question is probably how to write a daemon. To answer that, I need to know what kind of Linux you're using and whether you have things like systemd, daemonize, etc.
Check for the presence of a file at the beginning of your script (for example /tmp/runonce-import_shell_script). If it exists, that means the same script is already running (or the previous one halted with an error).
You can also add a timestamp to that file so you can check how long the script has been running (and maybe decide to run it again after 24 hours even if the file is present).
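A rough sketch of that flag-file idea (the path and the 24-hour expiry are only example values):
#!/bin/sh
FLAG=/tmp/runonce-import_shell_script

if [ -e "$FLAG" ]; then
    # Flag present: another run is active, or a previous run died.
    # Only continue if the timestamp stored in it is older than 24 hours.
    age=$(( $(date +%s) - $(cat "$FLAG") ))
    [ "$age" -lt 86400 ] && exit 0
fi

date +%s > "$FLAG"
trap 'rm -f "$FLAG"' EXIT   # remove the flag on normal termination

# ... the actual import work goes here ...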

Can a bash script tell if it's being run via cron?

Not having much luck Googling this question, I thought about posting it on SF, but it actually seems like a development question. If not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same scripts manually sometimes. The problem is that every time I run my script manually and it fails, it sends me an e-mail, even though I can look at the output and view the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
you can try "tty" to see if it's run by a terminal or not. that won't tell you that it's specifically run by cron, but you can tell if its "not a user as a prompt".
you can also get your parent-pid and follow it up the tree to look for cron, though that's a little heavy-handed.
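A rough sketch of that parent walk (assuming a ps that accepts -o ppid= and -o comm=):
pid=$$
while [ "$pid" -gt 1 ]; do
    name=$(ps -o comm= -p "$pid")
    case "$name" in
        cron|crond) echo "started by cron"; break ;;
    esac
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done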
I had a similar issue. I solved it by checking whether stdout is a TTY. This is a check to see if your script runs in interactive mode:
if [ -t 1 ] ; then
echo "interacive mode";
else
#send mail
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal. '1' is stdout.
Here's two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will send anything printed out to that email address, e.g.:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine if running under cron, e.g.:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command-line argument that is -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
echo "Being run by cron"
else
echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned: it follows your PID up the process tree, checking for cron.
Note: This solution only works specifically for cron, unlike some of the other solutions, which work anytime the script is being run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb" but under a shell it's something else. Run the set command in your terminal, then in a cron script, and compare the output:
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. This works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+) but is able to give a much more authoritative answer than previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature. When run from crontab, it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell the $TERM variable returns the value vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I also liked the idea from Tal, but also see the risk of having undefined returns. I ended up with a slightly modified version, which seems to work very smooth in my opinion:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
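For example, used the same way as the earlier variant:
if [ "$CRON" -eq 1 ]; then
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi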
