Creating more permanent crontab files - bash

I just recently asked this question: https://stackoverflow.com/questions/6359367/running-a-bash-program-every-day-at-the-same-time
The solution of using crontab -e to create a job worked very well and my script worked fine.
However, I found that once I exited the terminal, that job was deleted. How can I create a cron-mediated job that will run every day at the same time regardless of whether I exit the terminal or even turn off my computer (assuming my computer is turned back on by the time the cron job is scheduled to execute)?

cron is permanent: jobs added with crontab -e are stored in your user's crontab and survive logging out and rebooting. So the accepted answer given in the linked question will run the script at 7 AM every day; it has nothing to do with whether you are logged in or not.
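For reference, an entry like the following (the script path is only a placeholder) keeps firing on schedule whether or not a terminal is open, and you can confirm it is still installed at any time with crontab -l:

# run the script every day at 7 AM; replace the path with your own script
0 7 * * * /home/you/myscript.sh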

Related

Continue script after reboot with crontab in a terminal [duplicate]

I have a bash script to install some software on Linux. The install script needs to be run as root. The installation process reboots twice and continues after each reboot.
I managed to manipulate the crontab to add/remove jobs to get that working. However, I would like the user to be informed whether the install script has finished or not, so they can wait until the last reboot has completed.
The only solution I could think of was to run the crontab job in an open terminal, so the user can see the installation is still in progress.
Question 1: Is this a good solution? Any alternative?
Question 2: If the solution is good, how can I make sure a terminal is opened and the crontab job is run in that terminal?
Cron jobs are executed without any attached terminal. You'll have to create one in your cron script, and redirect all output from your script's commands to it. Maybe the simplest option is to redirect your script's output to a logfile, and open a terminal which just does tail -f <logfile>. You can then kill the terminal when your script is complete. If you're using xterm (as an example), you can do xterm -e "tail -f logfile.txt".
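A minimal sketch of that approach, assuming an xterm-capable desktop session is available (the script name, log path, and install commands below are placeholders, not the asker's actual script):

#!/bin/bash
# wrapper run by cron after each reboot (hypothetical)
LOG=/tmp/install.log
export DISPLAY=:0                 # cron has no X session; adjust to your display
xterm -e "tail -f $LOG" &         # open a terminal that follows the log
TERM_PID=$!
{
    echo "Continuing installation..."
    # ... real install commands go here ...
    echo "Installation finished."
} >> "$LOG" 2>&1
kill "$TERM_PID"                  # close the progress terminal when done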

running shell script with windows task scheduler

I currently have a simple shell script that I created for a Linux machine to be run using cron, but now I want to be able to run the file using Windows Task Scheduler. I have tried to get it to work using cron for Cygwin, but even after running cron-config successfully and ensuring that the shell script can be executed successfully, for some reason the cron task simply wasn't executing. So I decided to give in and use the Windows Task Scheduler. In order to do this, I looked at the following posts about the issue:
Cgywin .sh file run as Windows Task Scheduler
http://www.davidjnice.com/cygwin_scheduled_tasks.html
in my case, the entry in the "actions" tab of the new task looks like this:
program/script: c:\cygwin64\bin\bash.exe
arguments: -l -c "/cygdrive/c/users/paul/bitcoinbot/download_all_data.sh >> cygdrive/c/users/paul/bitcoinbot/logfile.log 2>&1"
start in: c:\cygwin64\bin
Notice that I redirected the output of the shell script to a log file, so that I should be able to see there whether the program ran. Other than that, I simply edited the "trigger" tab to run the task daily, and set the time to a couple of minutes in the future to see whether it ran successfully.
Alas, when I look at the detailed event history for the task, nothing changes when the trigger time passes. And when I manually "run" the task, the event history seems to add a few different events, but the task is completed within seconds, whereas this task should take over an hour (and it does when the shell script is executed directly from the terminal). And when I look for the log file that should have been created, there is nothing.
Does anyone have any idea what might be the issue here? How can I get my task to run properly at the trigger time, and how can I make sure it does so?
Best,
Paul
EDIT:
Here are the pictures showing the event history, as per Ken White's request.
Please ignore the fact that it says there are 24 events. These are from multiple separate runs of the task. The events shown here are a complete list of the events triggered by a single run.
EDIT 2:
Regarding my attempts to get cron to work, I have run into the following problem when I try to start the cron service using cygrunsrv. First of all, I installed cron as a service by typing
cygrunsrv -I cron -p /usr/sbin/cron.exe -a -D
Now when I type
$ cygrunsrv -Q cron
I get:
Service: cron
Current State: stopped
Command: /usr/bin/cron.exe
Then I tried to start the cron service by typing
cygrunsrv -S cron
and got:
Cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062:
The service has not been started.
Does anyone have any idea what this error means? I tried googling it, but couldn't find any answers.

ubuntu cron job stopped working

The cron job used to work well and suddenly stopped working:
1 * * * * /usr/bin/python3 /home/roy/update.py
It can still run manually on the command line.
Then I tried to debug it by the following command:
/usr/bin/python3 /home/roy/update.py 2>&1 >> /home/roy/cron_error_report.txt
There is no error shown in the cron_error_report.txt either.
Can anybody help me?
Make sure cron is running
sudo service cron status
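On Ubuntu you can also check the system log to see whether cron actually tried to start the job (this is the stock Ubuntu log location):
grep CRON /var/log/syslog | tail -n 20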
I hope my answer can help others. It really took a long time to figure out.
I had moved a file used by my Python program to a shared folder and exported the shared folder in PYTHONPATH.
So there was no problem when I ran the script on the command line. However, cron could not run it, because cron does not pick up the PYTHONPATH from my shell. So I had to move the file back to my current folder, and cron started working again.
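Alternatively, if you want to keep the file in the shared folder, you can set PYTHONPATH inside the crontab itself, since cron does not read your shell profile. A sketch, assuming the shared folder is /home/roy/shared (the real path isn't given above):

PYTHONPATH=/home/roy/shared
1 * * * * /usr/bin/python3 /home/roy/update.py >> /home/roy/cron_error_report.txt 2>&1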

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script to execute, but even though the cron job runs (according to my host's support), the script doesn't seem to be doing anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and the cronjob2.sh
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. From the cron job it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node and it cannot be found. How would I possibly fix this?
EDIT2:
Accepted answer at superuser.com. Thank you all for the help.
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For cron job lines in a crontab it's not required to specify the kind of shell (or e.g. perl).
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause different behavior of your script between an interactive start and a start by the cron daemon is a possibly different environment, first of all the PATH variable. Therefore check whether your script can be executed in the very restricted environment that the cron daemon provides. You can determine your cron job environment experimentally by starting a temporary cron job that executes the env command and writes its output to a file.
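For example, a temporary entry like this dumps the environment cron provides, so you can compare it with your interactive shell (the output path is arbitrary):
* * * * * env > /tmp/cron_env.txt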
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its content to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
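Combining this with the log redirection from the other answer, the adjusted cron command would look like this (paths taken from the question):
/bin/bash /home1/myusername/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1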

How to check and kill cron job if particular cron running using shell script

I have two cron jobs for importing processed images into the database, scheduled to run once every two days at 01:02 server time. I need a shell script to check whether that cron job is still running, and to kill it if it is still running when the next run starts two days later. Can anybody guide me?
Example:
2 1 */2 * * cd /var/www/railsapp/book_app_v2 && /usr/local/bin/rake RAILS_ENV=production db:load_java_photo 2>&1 >> /var/www/railsapp/book_app/log/cron_book_photo.log
If I understand you right, you want to prevent cron overruns. Check out hatools, which addresses exactly that issue.
halockrun provides a simple and reliable way to implement locking in shell scripts. A typical usage for halockrun is to prevent cron jobs from running simultaneously. halockrun's implementation makes it very resilient to all kinds of stale locks.
hatimerun provides a time-out mechanism that can be used from shell scripts. hatimerun can set multiple actions (signals to be sent) on multiple timeouts.
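If installing hatools isn't an option, flock from util-linux gives a similar skip-if-still-running behavior. A sketch based on the crontab line from the question (the lock file path is arbitrary):

2 1 */2 * * flock -n /tmp/book_photo.lock -c 'cd /var/www/railsapp/book_app_v2 && /usr/local/bin/rake RAILS_ENV=production db:load_java_photo >> /var/www/railsapp/book_app/log/cron_book_photo.log 2>&1'

With -n, flock exits immediately instead of starting a second import while the previous run still holds the lock.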
