How to check whether a particular cron job is running and kill it using a shell script - bash

I have two cron jobs that import images into the database, scheduled to run once every two days at server time 01:02. Using a shell script, I need to check whether the job is still running, and kill the previous run if it is still going when the job fires again two days later. Can anybody guide me?
Example:
2 1 */2 * * cd /var/www/railsapp/book_app_v2 && /usr/local/bin/rake RAILS_ENV=production db:load_java_photo 2>&1 >> /var/www/railsapp/book_app/log/cron_book_photo.log

If I understand you right, you want to prevent cron overruns. Check out hatools, which addresses exactly that issue.
halockrun provides a simple and reliable way to implement locking in shell scripts. A typical use for halockrun is to prevent cron jobs from running simultaneously. halockrun's implementation makes it very resilient to all kinds of stale locks.
hatimerun provides a time-out mechanism that can be used from shell scripts. hatimerun can set multiple actions--signals to be sent--on multiple timeouts.
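If you cannot install hatools, flock(1) from util-linux gives you the same overrun protection. A minimal sketch of a wrapper script, assuming a hypothetical lock file path (the rake command is taken from your crontab entry above):
#!/bin/bash
# Hold an exclusive lock on file descriptor 9 for the lifetime of the script.
exec 9>/var/lock/book_photo.lock
if ! flock -n 9; then
    echo "previous import still running, exiting" >&2
    exit 1
fi
cd /var/www/railsapp/book_app_v2 && \
    /usr/local/bin/rake RAILS_ENV=production db:load_java_photo \
    >> /var/www/railsapp/book_app/log/cron_book_photo.log 2>&1
Point the crontab entry at this wrapper instead of invoking rake directly; an overlapping run then exits immediately rather than piling up.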

Related

cron - Running multiple cron jobs located in /etc/cron.d

In my docker container, I have multiple cron jobs that need to be run, with each doing a different thing:
Some cron jobs will import a module
Some cron jobs will activate the imported module
Finally, some cron jobs will do a cleanup
This is the order in which they should run.
Currently, cron is not a running process in my container. However, including the following in my entrypoint script has it running in the foreground:
#!/bin/bash
cron -n -s -m off
With this included, it does run the cron jobs. However, a key point is that each set of cron jobs (import, activate, cleanup) uses one of three bash scripts to perform its tasks. So, for example, the cron jobs that activate a module have the following format:
MAILTO=""
* * * * * root /tmp/moduleActivation.sh <module> <version>
Each activation cron job calls a bash script that takes two parameters and activates the given module. The way these scripts are set up, only one instance can be running at any one time.
With my current setup, cron attempts to run every job at the same time, which is not my desired result: multiple instances of the activation script would try to run simultaneously, which can't happen.
I am quite new to cron - how can I prevent this from happening and have the cron jobs run in the desired order? Will a crontab file take precedence in some way if I was to include one?
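One way to get both ordering and mutual exclusion is to replace the separate crontab entries with a single entry that runs a wrapper calling the three scripts in sequence. A minimal sketch, assuming hypothetical names for the wrapper, the lock file, and the import/cleanup scripts (only moduleActivation.sh appears in the question):
#!/bin/bash
# run_pipeline.sh - run the three stages in order, one pipeline at a time.
set -e                                   # abort the pipeline if any stage fails
exec 9>/var/lock/pipeline.lock
flock -n 9 || exit 0                     # a previous pipeline is still running
/tmp/moduleImport.sh "$1" "$2"           # 1. import the module
/tmp/moduleActivation.sh "$1" "$2"       # 2. activate it
/tmp/moduleCleanup.sh "$1" "$2"          # 3. clean up
A single crontab line such as
* * * * * root /tmp/run_pipeline.sh <module> <version>
then replaces the three separate entries, so the stages can never interleave.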

Script didn't finish execution but cron job started again

I am trying to run a cron job that executes my shell script; the shell script runs Hive and Pig scripts. I have set the cron job to execute every 2 minutes, but the cron job starts again before my shell script finishes. Is that going to affect my results, or will the next run only start once the script has finished executing? I am in a bit of a dilemma here. Please help.
Thanks
I think there are two ways to better resolve this, a long way and a short way:
Long way (probably most correct):
Use something like Luigi to manage job dependencies, then run that with Cron (it won't run more than one of the same job).
Luigi will handle all your job dependencies for you and you can make sure that a particular job only executes once. It's a little more work to set up, but it's really worth it.
Short Way:
Lock files have already been mentioned, but you can do this on HDFS too, that way it doesn't depend on where you run the cron job from.
Instead of checking for a lock file, put a flag on HDFS when you start and finish the job, and have this as a standard thing in all of your cron jobs:
# At the start of the job:
hadoop fs -touchz /jobs/job1/2016-07-01/_STARTED
# At the finish:
hadoop fs -touchz /jobs/job1/2016-07-01/_COMPLETED
# Then check them ('hadoop fs -test -e' returns 0 when the path exists):
if ! hadoop fs -test -e /jobs/job1/2016-07-01/_STARTED && \
   ! hadoop fs -test -e /jobs/job1/2016-07-01/_COMPLETED; then
    run_job                                             # your actual job here
    hadoop fs -touchz /jobs/job1/2016-07-01/_COMPLETED  # add the completed flag
    hadoop fs -rm /jobs/job1/2016-07-01/_STARTED        # remove the started flag
fi
At the start of the script, have a check:
#!/bin/bash
if [ -e /tmp/file.lock ]; then
    rm /tmp/file.lock   # lock file exists: previous run completed, remove it and continue
else
    exit                # no lock file: the previous execution has not completed yet
fi
....                    # Your script here
touch /tmp/file.lock    # mark this run as completed
There are many other ways of achieving the same thing; I am giving a simple example. Note that with this scheme you must create /tmp/file.lock by hand before the very first run, since the script exits whenever the file is absent.
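For instance, on systems that ship util-linux, a single crontab line can take the lock itself, with no changes to the script (the lock file path and script path here are hypothetical):
*/2 * * * * flock -n /tmp/hive_job.lock /path/to/your_script.sh
With -n, flock exits immediately instead of waiting when the previous run still holds the lock, so overlapping runs are skipped rather than queued.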

Why does scheduling Spark jobs through cron fail (while the same command works when executed on terminal)?

I am trying to schedule a Spark job using cron.
I have made a shell script and it executes well on the terminal.
However, when I execute the script using cron it gives me an "insufficient memory to start JVM thread" error.
Every time I start the script from the terminal there is no issue. The issue only appears when the script is started by cron.
Could you suggest something?
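A frequent cause is that cron starts the script with a minimal environment, so JAVA_HOME, PATH, and any memory settings exported in your login shell are missing. A sketch of a wrapper that sets them explicitly, assuming hypothetical install paths and a hypothetical job file:
#!/bin/bash
# cron provides almost no environment, so set everything spark-submit needs.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export SPARK_HOME=/usr/local/spark
export PATH="$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH"
spark-submit --driver-memory 2g /path/to/job.py >> /tmp/spark_cron.log 2>&1
Comparing the output of env in an interactive shell against env captured from a temporary cron job usually pinpoints the missing variable.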

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script for execution, but even though, according to my host's support, the cron job runs, the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh contains:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. When run from the cron job it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, and node cannot be found. How would I possibly fix this?
EDIT2:
Accepted answer at superuser.com. Thank you all for your help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
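One common remedy for the "env: node: No such file or directory" error (an assumption here; the authoritative fix is in the accepted answer linked above) is to make node visible on cron's very short PATH from inside the script itself:
#!/bin/bash
# cron's PATH rarely includes user-installed binaries; prepend the directory
# that actually holds the node binary on this host (hypothetical location below).
export PATH="/home1/myusername/bin:$PATH"
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0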
For cron job lines in a crontab it is not required to specify the kind of shell (or, e.g., of perl).
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause your script to behave differently when started interactively versus by the cron daemon is a different environment, first of all the PATH variable. Therefore check whether your script can run in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by creating a temporary cron job that executes the env command and writes its output to a file.
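For example, a throwaway crontab entry like this one (the output path is arbitrary) captures cron's actual environment once per minute; remove it after one run:
* * * * * env > /tmp/cron_env.txt
Comparing /tmp/cron_env.txt with the output of env in your SSH session shows exactly which variables (typically PATH) differ.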
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its content to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
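In that case the cPanel command would become (same script path as above):
/bin/bash /home1/myusername/public_html/somefolder/cronjob2.sh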

Creating more permanent crontab files

I just recently asked this question: https://stackoverflow.com/questions/6359367/running-a-bash-program-every-day-at-the-same-time
The solution of using crontab -e to create a job worked very well and my script worked fine.
However, I found that once I exited the terminal, that job was deleted. How can I create a cron-mediated job that will run every day at the same time regardless of whether I exit the terminal or even turn off my computer (assuming my computer is turned back on when the cron job is scheduled to execute)?
cron is permanent. So the accepted answer given in the linked question would run the script at 7 AM every day. It has nothing to do with whether you are logged in or not.
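For reference, an entry of the form below (hypothetical script path) survives logouts and reboots because it lives in your user crontab file, not in the terminal session:
0 7 * * * /home/you/scripts/daily_job.sh
Entries added with crontab -e only disappear if you delete them (or wipe the whole crontab with crontab -r); closing the terminal does not remove them.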
