bash file: cronjob vs manual, why are they different?

I have a bash script that runs every five minutes. Among other things it runs php scripts reading on existing files, and at the end it sends an email. When run manually, it does the job. When the cronjob runs, it partially completes the task. The code below:
DIR="/somedir/"
php ${DIR}client.php $DIR
cat ${DIR}alert_list.txt | uniq | while read alert;
do
    if [ -s ${DIR}alerts/$alert.txt ]; then
        # send the email.
        echo "Sending email for..."$alert >> ${DIR}email.txt
        DETAILFILE="tools/"$alert
        DETAILFILEP=${DETAILFILE}".txt"
        php ${DIR}email.php $alert
    fi
done
echo 'search completed.'
In 'cronjob mode' it never gets to the 'do' statement; in manual mode it does everything.
Any thoughts?
Thanks a lot!

I found the issue within the PHP scripts. Relative references to files located in other paths are missed when the job runs automatically. Apparently cron runs the script from a different working directory, so it could not progress because the input files created by the initial php script were missing.
Thanks.

The difference between a manual run and a cron run is that cron does not source your .bash* files, so some of the settings your script relies on (e.g. PATH) may be different.
Also (replying to your previous comment), the working directory of a cron job is $HOME, so the files you mention are not picked up, whereas a manual run picks them up from the directory you run it in.
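If it helps, a common fix (a sketch only, assuming the script lives in /somedir/ as in your snippet) is to make the script independent of how it is launched by cd'ing to its own directory and setting PATH explicitly at the top:

#!/bin/bash
# Change to the directory the script lives in, so relative paths inside
# the PHP scripts resolve the same way under cron as they do manually.
cd "$(dirname "$0")" || exit 1

# cron does not source your .bash* files, so set PATH explicitly.
export PATH=/usr/local/bin:/usr/bin:/bin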
Hope this helps.

How to run shell script within shell script with fixed arguments

I have a simple script that creates a loop around another script and directly passes the parameters and arguments to that script - the loop comes into play because the script is supposed to run over several files. The way I wrote it, it's currently not working, so how should I attach these parameters? I'm fairly new to bash, so any help will be appreciated a lot!
#!/bin/bash
SCRIPT_PATH="xx.sh"
for x in {001..031}; do
"$SCRIPT_PATH" /data/raw/"$x"_AE data/processed/"$x"_AE 5 --info
done
There may be a path issue in your script: the first path starts with '/' (/data/raw/...) so it is absolute, but that is NOT the case for the second one (data/processed/...); is that intentional?
Ensure there is no directory/path issue (where is xx.sh located?).
Ensure the user who launches the script has access permissions on the /data directories and sub-directories.
Let me know if this fixes your issue.
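For example (a sketch only, assuming xx.sh sits next to the calling script and that both paths were meant to be absolute), you could anchor everything explicitly:

#!/bin/bash
# Resolve the directory this wrapper lives in, so xx.sh is found no matter
# where the wrapper is launched from.
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_PATH="$SCRIPT_DIR/xx.sh"

for x in {001..031}; do
    # Both the input and output paths are absolute here.
    "$SCRIPT_PATH" "/data/raw/${x}_AE" "/data/processed/${x}_AE" 5 --info
done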

Bash file running fine manually but on cronjob stops

I've created a bash file that queries my database and then updates some tables.
When I run it manually everything goes smoothly but when I run it with a cronjob it runs the first query and then stops before it goes into a loop.
After looking into it on the net I found a few things that may be the issue but from my side everything looks in order.
So what I did:
Checked that #!/bin/bash is included at the start of my bash script, and it is.
Checked that the path is correct in the cronjob. My cronjob is below:
0-59/5 * * * * cd /path/path2/bashLocation/; ./bash.sh
The loop is in the format of
for ID in ${IDS//,/ }
do
...do something
done
This works fine when tested manually. My IDS are in a comma-separated string, which is why I split it with //,/ . (Works fine.)
I log all outputs in a log file but it doesn't show any error.
Has anyone encountered this issue before or has any ideas how to fix the issue?
If the command you are running in cron has percent signs ('%'), they need to be escaped with a backslash. I've been bitten by this. From the manpage: "Percent-signs (%) in the command, unless escaped with backslash (\) ..."
The $PATH variable may be different when run from cron. Try putting something like this at the beginning of your script: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Try running bash explicitly, i.e. rather than ./bash.sh in crontab, try /bin/bash bash.sh
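Putting those suggestions together, a crontab entry along these lines (a sketch; the log file path is made up for illustration) makes the PATH, the interpreter and the output explicit:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Every 5 minutes: cd to the script's directory, run it with bash explicitly,
# and capture stdout and stderr so a failure leaves a trace in the log.
0-59/5 * * * * cd /path/path2/bashLocation/ && /bin/bash bash.sh >> /path/path2/bashLocation/cron.log 2>&1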
I don't know how helpful this may be to some people, but I noticed that when I ran printenv and checked my logs, the shell was reported as /bin/sh even though I define #!/bin/bash at the top of my script and run it as a bash file.
So what I did was change all the parts of my code that were not supported by sh, and my cron job works fine.
So I assume that cron does not support bash files. (Didn't find anything on the internet about this.)
Why it runs in /bin/sh I don't know.
Hope someone finds this helpful.
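For what it's worth, cron sets SHELL=/bin/sh by default (see the list of cron's environment variables further down this page), which is what printenv will report; if you want cron itself to use bash for the command lines it runs, you can declare that in the crontab (a sketch):

# Make cron use bash, rather than the default /bin/sh, for the jobs below.
SHELL=/bin/bash
0-59/5 * * * * cd /path/path2/bashLocation/; ./bash.sh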

bash script rsync itself from remote host - how to?

I have multiple remote sites which run a bash script, initiated by cron (running VERY frequently -- every 10 minutes or less), in which one of its jobs is to sync a "scripts" directory. The idea is for me to be able to edit the scripts in one location (a server in a data center) rather than having to log into each remote site and make any edits manually. The question is, what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)
I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? Pros? Cons? Other options?
if [ -e ${newScriptPath} ]; then
    echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi
One problem I see is that if I use "1 minute" (which is at's smallest increment) and the script ends, and cron initiates the next job before at replaces the script, it could end up replacing the script during the next run....
Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested) forces bash to read the entire script into memory, so changes made to the file while it runs have no effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so that anything appended to the end of the file won't be executed:
#!/bin/bash
{
# Actual script contents go here
exit
}
Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.
Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.
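One general way to do that (a sketch only, assuming flock(1) is available; the lock file and script names are made up) is to have every managed script hold a lock while it runs, and have the updater take the same lock before swapping files in:

#!/bin/bash
# Updater sketch: each managed script would start with
#   exec 9>/var/lock/scripts.lock; flock 9
# so the lock is held for its whole run. Here we wait up to 60 seconds
# for the lock before replacing the script; otherwise try again later.
if flock -w 60 /var/lock/scripts.lock -c 'mv /path/to/script.new /path/to/script.cur'; then
    echo "script updated"
else
    echo "lock still held after 60s; will retry on the next cron run" >&2
fi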
So I ended up using the "at" utility, but only if the file changed. I have a ".cur" and ".new" version of the script on the local machine. If the MD5 is the same on both, I do nothing. If they are different, I wait until after the main script completes, then force copy the ".new" to the ".cur" in a different script.
I create the same lock file (name) for update_script.sh, so another instance of the first script won't run while I'm changing it.
The relevant part of the main script:
# compare MD5 checksums of the current and new versions of the script
file1=$(md5sum script_cur.sh | cut -d' ' -f1)
file2=$(md5sum script_new.sh | cut -d' ' -f1)
if [ "$file1" == "$file2" ]; then
    echo "Files have the same content"
else
    echo "Files are different, scheduling update_script.sh at command"
    at -f update_script.sh now + 1 minute
fi

How to test things in crontab

This keeps happening to me all the time:
1) I write a script (ruby, shell, etc).
2) run it, it works.
3) put it in crontab so it runs in a few minutes so I know it runs from there.
4) It doesn't; no error trace; back to step 2 or 3 a thousand times.
When my ruby script fails in crontab, I can't really tell why it fails, because when I pipe the output like this:
ruby script.rb >& /path/to/output
I sorta get the output of the script, but I don't get any of the errors from it, and I don't get the errors coming from bash (like if ruby is not found or the file isn't there).
I have no idea what environment variables are set and whether or not that's the problem. It turns out that to run a ruby script from crontab you have to export a ton of environment variables.
Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?
When debugging, I have to reset the timer and go back to waiting. Very time consuming.
How can I test things in crontab better, or avoid these problems?
"Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?"
Yes:
bash -li -c /path/to/script
From the man page:
[vindaloo:pgl]:~/p/test $ man bash | grep -A2 -m1 -- -i
-i If the -i option is present, the shell is interactive.
-l Make bash act as if it had been invoked as a login shell (see
INVOCATION below).
G'day,
One of the basic problems with cron is that it gives you only a minimal environment. In fact, you get just four environment variables set, and they are:
SHELL - set to /bin/sh
LOGNAME - set to your userid as found in /etc/passwd
HOME - set to your home dir. as found in /etc/passwd
PATH - set to "/usr/bin:/bin"
That's it.
However, what you can do is take a snapshot of the environment you want and save that to a file.
Now make your cronjob run a trivial wrapper script that sources this env. file and then executes your Ruby script.
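A minimal sketch of that setup (file names are placeholders). First, from a normal interactive shell, snapshot your environment:

env > "$HOME/cronjob.env"

Then point the cron entry at a wrapper like this:

#!/bin/bash
# Restore the saved environment, then run the real job.
set -a                       # auto-export everything the env file defines
. "$HOME/cronjob.env"        # note: values containing spaces may need quoting by hand
set +a
exec ruby "$HOME/script.rb" "$@"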
BTW, having a wrapper source a common env. file is an excellent way to enforce a consistent environment for multiple cronjobs. It also enforces the DRY principle, because it gives you just one place to update things as required, instead of having to search through a bunch of scripts for a specific string if, say, a logging location changes or a different utility is now being used, e.g. gnutar instead of vanilla tar.
Actually, this technique is used very successfully with The Build Monkey which is used to implement Continuous Integration for a major software project that is common to several major world airlines. 3,500kSLOC being checked out and built several times a day and over 8,000 regression tests run once a day.
HTH
'Avahappy,
Run a 'set' command from inside of the ruby script, fire it from crontab, and you'll see exactly what's set and what's not.
To find out the environment in which cron runs jobs, add this cron job:
{ echo "\nenv\n" && env|sort ; echo "\nset\n" && set; } | /usr/bin/mailx -s 'my env' you@example.com
Or send the output to a file instead of email.
You could write a wrapper script, called for example rbcron, which looks something like:
#!/bin/bash
RUBY=ruby
export VAR1=foo
export VAR2=bar
export VAR3=baz
$RUBY "$*" 2>&1
This redirects standard error from ruby to standard output. Then you run rbcron in your cron job, and the standard output contains out+err of ruby, as well as the "bash" errors coming from rbcron itself. In your cron entry, redirect with > /path/to/output 2>&1 so that output and error messages both go to /path/to/output.
If you really want to run it as yourself, you may want to invoke ruby from a shell script that sources your .profile/.bashrc etc. That way it'll pull in your environment.
However, the downside is that it's not isolated from your environment, and if you change that, you may find your cron jobs suddenly stop working.
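A tiny sketch of such a wrapper (paths are illustrative):

#!/bin/bash
# Pull in your normal login environment, then run the job,
# sending both output and errors to a log file.
. "$HOME/.profile"
exec ruby /path/to/script.rb > /path/to/output 2>&1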

Run a list of bash scripts consecutively

I have a load of bash scripts that back up different directories to different locations. I want each one to run every day. However, I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want this script to keep working even if I add and remove backup scripts, without having to edit it manually.
So what I need to do is generate a list of the scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
    "$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
[ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && $SCRIPT
done
If your system has run-parts then that will take care of it for you. You can name your scripts like "10script", "20anotherscript" and they will be run in order in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems it's a script. On mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
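For example (a sketch; the directory is illustrative and --test is the Debian run-parts flag), you can preview what would be executed without running anything:

# Print the scripts run-parts would execute, in order, without running them.
run-parts --test /etc/cron.daily

# Note: with the Debian run-parts, names containing dots (e.g. backup.sh)
# are skipped; names like 10-backup-home or 20-backup-db are accepted.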
