In my docker container, I have multiple cron jobs that need to be run, each doing a different thing:
Some cron jobs will import a module
Some cron jobs will activate the imported module
Finally, some cron jobs will do a cleanup
This is the order in which they should run.
Currently, cron is not a running process in my container; however, including the following in my entrypoint script has it running in the foreground:
#!/bin/bash
cron -n -s -m off
With this included, it does run the cron jobs. However, a key point is that each set of cron jobs (import, activate, cleanup) uses one of three bash scripts to perform their tasks. So, for example, the cron jobs that activate a module will have the following format:
MAILTO=""
* * * * * root /tmp/moduleActivation.sh <module> <version>
Each activation cron job calls a bash script that takes two parameters and activates the given module. The way these scripts are written, only one instance can be running at any one time.
With my current setup, cron attempts to run every job at the same time, so multiple instances of the activation script try to start simultaneously, which can't happen.
I am quite new to cron - how can I prevent this from happening and have the cron jobs run in the desired order? Will a crontab file take precedence in some way if I were to include one?
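For illustration, one common way to get both the ordering and the single-instance guarantee (a sketch only, not necessarily the right fit here) is to have a single cron entry call a wrapper that runs the three phases in sequence under a flock lock. Only moduleActivation.sh appears in the question; the import and cleanup script names below are assumptions:
#!/bin/bash
# /tmp/runModulePipeline.sh -- hypothetical wrapper that runs the phases in order
(
    flock -n 9 || exit 1                 # a previous run is still active; skip this one
    /tmp/moduleImport.sh "$1" "$2"       # assumed import script
    /tmp/moduleActivation.sh "$1" "$2"   # activation script from the question
    /tmp/moduleCleanup.sh "$1" "$2"      # assumed cleanup script
) 9>/tmp/modulePipeline.lock
The cron entry would then point at the wrapper instead of the individual scripts:
MAILTO=""
* * * * * root /tmp/runModulePipeline.sh <module> <version>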
I have the following cron tasks:
/etc/cron.d/mongo
/etc/cron.d/elastic
These cron jobs only execute scripts located here:
/etc/script/mongo
/etc/script/elastic
These tasks run every 30 minutes, using the following cron schedule:
0,30 * * * *
I don't know why, but these tasks aren't executing every time. In 2 hours, for example, they only execute 2-3 times instead of 4. These tasks perform backups, so I need to be sure that they execute every 30 minutes.
Why is this happening?
PS: If I run crontab -e the file is empty; could that cause a problem?
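One way to narrow this down (a sketch only; the real contents of the /etc/cron.d files aren't shown in the question) is to make sure each file uses the /etc/cron.d format, which needs a user field, and to add a cheap timestamp line so every firing is recorded:
# /etc/cron.d/mongo -- assumed layout
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# files in /etc/cron.d need a user field between the schedule and the command
0,30 * * * * root /etc/script/mongo >> /var/log/mongo-backup.log 2>&1
# a companion entry that only logs a timestamp shows whether cron fired at all
# (% must be escaped as \% inside crontab lines)
0,30 * * * * root date '+\%F \%T mongo tick' >> /var/log/cron-ticks.log
Comparing the tick log with the backup log then tells you whether cron skipped a run or the backup script itself failed.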
I am trying to schedule a spark job using cron.
I have written a shell script and it runs fine from the terminal.
However, when I execute the script via cron, it fails with an "insufficient memory to start JVM thread" error.
Every time I start the script from the terminal there is no issue; the problem only appears when the script is started by cron.
Any suggestions would be appreciated.
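Since cron starts jobs with a minimal environment (and sometimes different limits), a common first step is to wrap the spark-submit call in a script that loads the same environment the terminal uses and makes the memory request explicit. Everything below -- paths, profile file, memory sizes -- is a placeholder, not taken from the question:
#!/bin/bash
# run_spark_job.sh -- sketch; all paths and sizes are assumptions
# load the same profile an interactive shell would, so JAVA_HOME/SPARK_HOME exist
. /home/youruser/.bash_profile

export PATH="$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH"

# ask for memory explicitly instead of relying on inherited defaults
"$SPARK_HOME/bin/spark-submit" \
    --driver-memory 2g \
    --executor-memory 2g \
    /path/to/your_job.py >> /tmp/spark_cron.log 2>&1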
I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script to execute, but even though, according to my host's support, the cron job runs, the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh contains:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. Run from cron, it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, which cannot be found. How would I fix this?
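Since the error points at node missing from cron's PATH, one sketch of a fix (the node location below is an assumption; which node over SSH shows the real one) is to extend PATH at the top of the script:
#!/bin/bash
# cronjob2.sh -- add the directory that contains node to cron's minimal PATH
export PATH="/home1/myusername/bin:/usr/local/bin:$PATH"   # assumed node location

/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0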
EDIT2:
Accepted answer at superuser.com. Thank you all for the help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For cron job lines in a crontab it's not required to specify the kind of shell or, e.g., perl.
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause your script to behave differently when started interactively and when started by the cron daemon is the environment, first of all the PATH variable. Therefore check whether your script can be executed in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by starting a temporary cron job that executes the env command and writes its output to a file.
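For example, a throwaway crontab entry along these lines (remove it once you have the output; the file name is arbitrary) captures exactly what cron provides:
* * * * * env > /tmp/cron-env.txt
Comparing /tmp/cron-env.txt with the output of env in your SSH session usually makes PATH problems obvious.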
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its contents to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
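As an illustration of the kind of difference that matters (a made-up example, not taken from the asker's script): constructs like arrays and [[ ]] work under bash but fail when the same file is run with sh on systems where sh is dash:
#!/bin/bash
# runs fine as "bash example.sh", fails as "sh example.sh" on dash-based systems
files=(one two three)                 # bash array -- syntax error under dash
if [[ ${#files[@]} -gt 2 ]]; then     # [[ ]] is a bash/ksh construct
    echo "more than two files"
fi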
I have two cron jobs for an image-import process into the database, scheduled to run once every two days at server time 01:02. I need to check from a shell script whether that cron job is running, and kill it if it is already running or is still running after two days. Can anybody guide me?
Example:
2 1 */2 * * cd /var/www/railsapp/book_app_v2 && /usr/local/bin/rake RAILS_ENV=production db:load_java_photo 2>&1 >> /var/www/railsapp/book_app/log/cron_book_photo.log
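A rough sketch of the check-and-kill part (the pgrep pattern and log path are guesses based on the crontab line above; adjust them to your setup):
#!/bin/bash
# kill a previous photo-import run if it is still going when the next one is due
PATTERN='rake RAILS_ENV=production db:load_java_photo'

if pgrep -f "$PATTERN" > /dev/null; then
    echo "$(date): previous import still running, killing it" >> /var/www/railsapp/book_app/log/cron_book_photo.log
    pkill -f "$PATTERN"
fi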
If I understand you right, you want to prevent cron overruns. Check out hatools, which addresses exactly that issue.
halockrun provides a simple and reliable way to implement locking in shell scripts. A typical usage for halockrun is to prevent cron jobs from running simultaneously. halockrun's implementation makes it very resilient to all kinds of stale locks.
hatimerun provides a time-out mechanism that can be used from shell scripts. hatimerun can set multiple actions--signals to be
sent--on multiple timeouts.
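If hatools isn't readily available, a roughly equivalent effect can be had with flock and timeout from util-linux/coreutils; this is a substitute sketch, not hatools' own syntax, and the wrapper script name is hypothetical:
# flock skips the run if the previous one still holds the lock; timeout kills a run after 25 minutes
2 1 */2 * * flock -n /tmp/book_photo.lock timeout 25m /var/www/railsapp/book_app_v2/run_photo_import.sh >> /var/www/railsapp/book_app/log/cron_book_photo.log 2>&1
Here run_photo_import.sh would just contain the cd && rake command from the question.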
I have a question.
I want to run several instances of the same job in parallel from within a script: I have a loop in which I invoke the jobs with dsjob, without the "-wait" and "-jobstatus" options.
I want the jobs to have completed before the script terminates, but I don't know how to verify whether a job instance has finished.
I thought about using the wait command, but it is not appropriate here.
Thanks in advance
First, you should make sure the job compile option "Allow Multiple Instance" is selected.
Second:
#!/bin/bash
# load the DataStage environment so dsjob and $DSHOME are available
. /home/dsadm/.bash_profile
INVOCATION=(1 2 3 4 5)    # one entry per invocation id
cd $DSHOME/bin
for id in "${INVOCATION[@]}"
do
    ./dsjob -run -mode NORMAL -wait test demo.$id
done
project -- test
job -- demo
$id -- invocation id
The first two lines in the shell script (sourcing the profile and changing to $DSHOME/bin) guarantee that the environment and PATH are set up so dsjob can run.
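If the instances should actually run in parallel rather than one after another, a variation of the same script (a sketch, using the same assumed project and job names) backgrounds each -wait call and lets the shell's wait collect them, which also covers the original "finish before the script terminates" requirement:
#!/bin/bash
. /home/dsadm/.bash_profile
INVOCATION=(1 2 3 4 5)
cd $DSHOME/bin
for id in "${INVOCATION[@]}"
do
    # each -wait call blocks in its own background process until that instance finishes
    ./dsjob -run -mode NORMAL -wait test demo.$id &
done
wait    # the script only ends once every backgrounded dsjob call has returned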
Run the jobs like you say without the -wait, and then loop around running dsjob -jobinfo and parse the output for a job status of 1 or 2. When all jobs return this status, they are all finished.
You might find, though, that you check the status of the job before it actually starts running and you might pick up an old status. You might be able to fix this by first resetting the job instance and waiting for a status of "Not running", prior to running the job.
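A sketch of that polling approach (it assumes the "Job Status" line printed by dsjob -jobinfo carries the numeric code in parentheses, e.g. "(1)" for RUN OK and "(2)" for RUN with warnings; the project and job names are the ones used above):
#!/bin/bash
. /home/dsadm/.bash_profile
cd $DSHOME/bin
for id in 1 2 3 4 5
do
    while true
    do
        # pull the numeric status code out of the "Job Status : ... (n)" line
        status=$(./dsjob -jobinfo test demo.$id | grep 'Job Status' | sed 's/.*(\([0-9]*\)).*/\1/')
        if [ "$status" = "1" ] || [ "$status" = "2" ]; then
            break
        fi
        sleep 30
    done
done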
Invoke the jobs in a loop without the -wait or -jobstatus option.
After your loop, check the job status with the dsjob command.
Example - dsjob -jobinfo projectname jobname.invocationid
You can code one more loop for this as well and use the sleep command inside it.
Then write your further logic according to the status of the jobs.
However, it is better to create a job sequence to invoke this multi-instance job simultaneously with different invocation IDs:
Create a sequence job if these are part of the same process.
Create different sequences, or directly create different scripts to trigger these jobs simultaneously with invocation IDs, and schedule them at the same time.
The best option is to create a standard, generalized script where everything is created or takes its value from command-line parameters.
Example - log files based on jobname + invocation-id.
Then schedule the same script with different parameters or invocations.
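A rough sketch of such a generalized script (the project, job name, and invocation ID come in as command-line parameters; the log directory is an assumption, and the log-file naming follows the jobname + invocation-id example above):
#!/bin/bash
# run_job.sh <project> <jobname> <invocationid> -- generalized trigger sketch
. /home/dsadm/.bash_profile

PROJECT=$1
JOB=$2
INVOCATION=$3
LOGDIR=/tmp/dsjob_logs                       # assumed log location
LOGFILE="$LOGDIR/${JOB}.${INVOCATION}.log"   # log file built from jobname + invocation-id

mkdir -p "$LOGDIR"
cd $DSHOME/bin

# run the requested instance and capture everything it prints
./dsjob -run -mode NORMAL -wait "$PROJECT" "$JOB.$INVOCATION" >> "$LOGFILE" 2>&1
echo "$(date): $JOB.$INVOCATION finished with exit code $?" >> "$LOGFILE"
It can then be scheduled several times at the same cron time with different parameters, e.g. run_job.sh test demo 1 and run_job.sh test demo 2.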