I used this bash script to check whether a service is running. If it is running, the script exits; otherwise it runs another script, which executes some commands and then exits.
My issue is that when I run my script manually it works fine, but when I run it with cron it does not execute correctly. Here is my script:
#!/bin/sh
SERVICE='loop2.sh'
# Look for the service in the process list, excluding the grep process itself
if ps ax | grep -v grep | grep "$SERVICE" > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    /home//www/loop2.sh
fi
What adjustment does my script need to work correctly under cron?
You're not being very specific. What error are you seeing?
Note that processes run under cron get a cut-down environment. In particular, environment variables such as PATH will be much reduced compared to your interactive shell. Some things to check:
- Log your script's stdout/stderr, e.g. myscript >/tmp/script.log 2>&1 (the file redirection must come before 2>&1, or stderr won't be captured).
- Check that your environment is as expected by running the env command in your script.
- Does this script really do what you want, and does it interact with cron how you wish? If your service isn't running you spawn a new one, but I'd expect you to put it in the background, making it a daemon rather than a (grand)child of the cron process.
- Is your script executable by whichever user it runs as under cron?
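Putting those points together, a minimal sketch of a cron-friendly version of the script might look like this (the PATH value and log location are assumptions for illustration):
#!/bin/sh
# Cron provides a minimal environment, so set PATH explicitly
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

SERVICE='loop2.sh'
LOG=/tmp/service-check.log   # hypothetical log location

if ps ax | grep -v grep | grep "$SERVICE" > /dev/null
then
    echo "$SERVICE service running, everything is fine" >> "$LOG"
else
    # Use an absolute path; cron's working directory is not your home
    /home//www/loop2.sh >> "$LOG" 2>&1 &
fi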
I am writing a bash script that checks whether an application is running. If it is not running, it should be started in a separate process (not a child process). If it is running, the window should be maximized. I more or less have it working, but the new process terminates shortly after being started, probably because the script process ends.
#!/bin/bash
if ps aux | grep App1 | grep -v grep > /dev/null
then
    echo App1 is running
    wmctrl -x -r WMClassOfApp1 -b "add,maximized_vert,maximized_horz"
else
    echo App1 is not running
    sh -c /usr/bin/app1 & disown   # This app should be started in a separate process and not terminate
fi
I probably have to add that I am calling this script from a udev rule. When I execute it in a terminal, it works fine. When I call it from the udev rule, the app1 terminates.
A bash script is not the right solution for this: the best option is to add this check to your system's crontab.
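As a sketch of that suggestion, a crontab entry along these lines would re-run the check every minute (the script path is hypothetical):
# Hypothetical crontab entry: run the check script every minute so cron,
# not the udev event, owns the lifetime of the launched app
* * * * * /home/user/bin/check_app1.sh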
I need to run rsync in the background through a shell script, but once it has started, I need to monitor the status of those jobs from the shell.
The jobs command returns nothing when run in the shell after the script exits; ps -ef | grep rsync shows that rsync is still running.
I could check the status from within the script, but I need to run the script multiple times, each with a different ip.txt file to push, so I can't keep the script running just to check the job status.
Here is the script:
for i in $(cat "$ip.txt"); do
    # Start each rsync in the background and discard all of its output
    rsync -avzh "$directory"/ user@"$i":/cygdrive/c/test/"$directory" > /dev/null 2>&1 &
done
jobs   # shows the job status while still inside the script's shell
exit 1
The output of the jobs command is empty after the shell script exits:
root@host001:~# jobs
root@host001:~#
What could be the reason, and how could I get the status of the jobs while rsync is running in the background? I can't find an article online related to this.
Since your shell (the one from which you execute jobs) did not start rsync, it doesn't know anything about it. There are different approaches to fixing that, but it boils down to starting the background processes from your shell. For example, you can run your script with the source bash builtin instead of executing it in a separate process. Of course, you'd then have to remove the exit 1 at the end, because otherwise it exits your shell.
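A minimal sketch of that approach, assuming the script is saved as push.sh and the exit 1 line has been removed:
# Run the script in the current shell so this shell owns the rsync jobs
source ./push.sh    # or: . ./push.sh
jobs                # now lists the background rsync processes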
I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in git-bash, containing the commands below:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands in git-bash directly, I get jobs running in the background, which I can see with jobs and manage with kill. However, when I run them through abc.sh, the processes run in the background, but the git-bash instance disowns them, and I can no longer see them using jobs.
How can I run them through the abc.sh file and still see them in the jobs list?
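As with the rsync question above, jobs is per-shell: a child shell's background jobs never appear in the parent's job table. A sketch of one workaround is to source the script so the current shell starts the jobs itself:
# Run abc.sh in the current git-bash shell instead of a child shell,
# so the background processes stay in this shell's jobs table
source ./abc.sh
jobs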
I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script for execution, but even though the cron job runs according to my host's support, the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh contains:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
When I execute this via SSH:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. From the cron job it does nothing.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, which cannot be found. How can I fix this?
EDIT2:
Accepted an answer at superuser.com. Thank you all for the help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For cron job lines in a crontab it's not required to specify the kind of shell (or e.g. the perl interpreter).
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause different behavior between an interactive start and a start by the cron daemon is the environment, first of all the PATH variable. Check whether your script can run in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by creating a temporary cron job that executes the env command and writes its output to a file.
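For example, a throwaway entry along these lines (the output file location is an assumption):
# Temporary crontab entry: dump cron's environment to a file once a minute
* * * * * env > /tmp/cron_env.log 2>&1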
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its contents to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH you are using bash, because this is the first line of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
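Given the /usr/bin/env: node: No such file or directory error from the edit above, one hedged fix is to extend PATH inside cronjob2.sh so cron can find node (the directory shown is an assumption; locate yours with which node in an SSH session):
#!/bin/bash
# Extend PATH so /usr/bin/env can locate node under cron's minimal environment
export PATH="$PATH:/home1/myusername/bin"   # hypothetical node location
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0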
I am using a shell script to monitor a PHP script. My aim is that this PHP script should never sleep or be terminated; it must always be running. The code I used is:
ps aux | grep -v grep | grep -q "$file" || ( nohup php -f "$file" -print > /var/log/file.log & )
Now this idea would not work for cases where the PHP script got stopped (process status code T). Any idea how to handle that case? Can such processes be killed permanently and then restarted?
How about just restarting the php interpreter when it dies?
while true ; do php -f "$file" -print >> /var/log/file.log ; done
Of course, someone could send the script a SIGSTOP, SIGTSTP, SIGTTIN, SIGTTOU to cause it to hang, but perhaps that person has a really good reason. You can block them all except SIGSTOP, so maybe that's alright.
Or if the script does something like call read(2) on a device or socket that will never return, this won't really ensure the 'liveness' of your script. (But then you'd use non-blocking IO to prevent this situation, so that's covered.)
Oh yes, you could also stuff it into your /etc/inittab. But I'm not giving you more than a hint about this one, because I think it is probably a bad idea.
And there are many similar tools that already exist: daemontools and Linux Heartbeat are the first two to come to mind.
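For a flavor of the daemontools approach: a service is just a directory containing a run script that execs the program, and supervise restarts it whenever it exits (the paths here are assumptions):
#!/bin/sh
# Sketch of a daemontools run script, e.g. /service/myphp/run;
# supervise re-runs it whenever the process exits
exec php -f /path/to/script.php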
If the script exits after it's been terminated, or if it crashes out and needs to be restarted, a simple shell script can take care of that.
#!/bin/bash
# runScript.sh - keep a php script running
php -q -f ./cli-script.php -- "$@"
exec "$0" "$@"
The exec $0 re-runs the shell script, with the parameters it was given.
To run it in the background you can nohup runScript.sh, or run it via init.d scripts, upstart, runit or supervisord, among others.
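For example, a hedged sketch of that nohup invocation (the log path is an assumption):
# Detach the supervisor loop from the terminal so it survives logout
nohup ./runScript.sh > /tmp/runScript.log 2>&1 &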