Continue script after reboot with crontab in a terminal [duplicate] - bash

This question already has answers here:
How to have a Shell script continue after reboot?
(3 answers)
Closed 3 years ago.
I have a bash script that installs some software on Linux. The install script needs to be run as root. The installation process reboots twice and continues after each reboot.
I managed to manipulate the crontab to add/remove jobs to get that working. However, I would like the user to be informed whether the install script has finished, so he/she can wait until the last reboot is complete.
The only solution I could think of was to run the crontab job in an open terminal, so the user can see that the installation is still in progress.
Question 1: Is this a good solution? Any alternative?
Question 2: If the solution is good, how can I make sure a terminal is opened and the crontab job is run in that terminal?

Cron jobs are executed without any attached terminal. You'll have to create one in your cron script and redirect all output from your script's commands to it. Perhaps the simplest option is to redirect your script's output to a logfile and open a terminal that just does tail -f <logfile>. You can then kill the terminal when your script is complete. If you're using xterm (as an example), you can do xterm -e tail -f logfile.txt.
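For illustration, a minimal sketch of this approach, assuming an X session, xterm, and hypothetical paths and stage names:

# Hypothetical @reboot crontab line that resumes the installer and logs everything:
# @reboot /root/install-stage2.sh >> /var/log/install.log 2>&1

# Inside the install script: open a terminal on the user's display so the
# progress log is visible (DISPLAY=:0 assumes the usual first X display):
DISPLAY=:0 xterm -e tail -f /var/log/install.log &
term_pid=$!

# ... installation steps, appending their output to /var/log/install.log ...

kill "$term_pid"   # close the terminal once the install has finished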

Related

Execute bash script that will continue through Apache restarts

I need to have a bash script triggered and run, but part of the script requires Apache to restart, which obviously stops the script from continuing. I can't move the restarts in the script to the end.
I have tried to run the bash script through a PHP script using shell_exec() in a GNU screen session to keep it going, but that doesn't work: as soon as Apache goes down, the script stops.
There has to be a way to do this, but I'm not seeing it.
How can I accomplish this?
Does nohup do the job?
nohup is a POSIX command which means "no hang up". Its purpose is to execute a command such that it ignores the HUP (hangup) signal and therefore does not stop when the user logs out.
Output that would normally go to the terminal goes to a file called nohup.out, if it has not already been redirected.
https://en.wikipedia.org/wiki/Nohup
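A minimal sketch of how that could look here; the script path and log location are placeholders, and this is the command string you would hand to shell_exec():

# Ignore SIGHUP and write output to a log instead of the (soon-gone) terminal:
nohup /path/to/restart-script.sh > /tmp/restart-script.log 2>&1 < /dev/null &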

Run shell script without closing the previous process

I'm stuck on this problem. I need to run two commands in a shell script, but neither one may terminate the other.
For example, this shell script:
psql database user &
gedit file
If I run the commands in that order, only the gedit process stays open and I can't see what happened to the psql process.
But if I do this:
gedit file &
psql database user
I can see psql's process, but it's closed by messages from gedit's process.
How can I execute this script without one process closing the other?
If you want to suppress output from gedit:
gedit file >/dev/null 2>&1 &
psql database user
However, the claim:
I can see psql's process, but it's closed by messages from gedit's process.
...simply doesn't happen: Messages from gedit go directly to the terminal; psql can't see them, so it can't possibly be exiting because of them.

How to run shell script on VM indefinitely?

I have a script on a VM that I want to run indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing so? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script to execute, but even though the cron job runs (according to my host's support), the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh contains:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. Run from the cron job, it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, which cannot be found. How would I fix this?
EDIT2:
Accepted an answer at superuser.com. Thank you all for the help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For command lines in a crontab it isn't necessary to specify the shell (or, e.g., the Perl interpreter). It's enough that your script contains a shebang line. Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause your script to behave differently when started interactively and when started by the cron daemon is the environment, first of all the PATH variable. Therefore check whether your script can run in the very restricted environment provided by the cron daemon. You can determine your cron job's environment experimentally by creating a temporary cron job that executes the env command and writes its output to a file.
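For example, a throwaway crontab entry like this dumps cron's environment to a file you can inspect (the file name is arbitrary):

* * * * * env > /tmp/cron_env.txt 2>&1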
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its content to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
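Regarding the /usr/bin/env: node: No such file or directory error from the question's edit: the usual fix is to make node reachable from cron's restricted PATH. A sketch with assumed directories; adjust them to wherever node is actually installed:

# Option 1: set PATH at the top of the crontab:
PATH=/home1/myusername/bin:/usr/local/bin:/usr/bin:/bin

# Option 2: export it inside cronjob2.sh before calling forever:
export PATH="/home1/myusername/bin:$PATH"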
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
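With the script path from the question, the cron command would then be:

/bin/bash /home1/myusername/public_html/somefolder/cronjob2.sh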

Creating more permanent crontab files

I just recently asked this question: https://stackoverflow.com/questions/6359367/running-a-bash-program-every-day-at-the-same-time
The solution of using crontab -e to create a job worked very well and my script worked fine.
However, I found that once I exited the terminal, that job was deleted. How can I create a cron-mediated job that will run every day at the same time regardless of whether I exit the terminal or even turn off my computer (assuming my computer is turned back on when the cron job is scheduled to execute)?
cron is permanent, so the accepted answer given in the linked question would run the script at 7 AM every day. It has nothing to do with whether you are logged in or not.
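In crontab syntax, that 7 AM daily schedule would look like this (the script path is a placeholder):

0 7 * * * /path/to/your-script.sh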
