Crontab closing background process? - bash

I am testing a bash script I hope to run as a cron job to scan a download log and perform labor-intensive conversions on image files. In order to run several conversions at once, the first script loops through the download log and sends the filename to the second script, which I set to run as a background process using &.
The script pair works well, but when the process is complete, I must press the enter key to return to a command prompt. This is a non-issue when I am running a test, but I am not sure if this behavior has ramifications when run as a cron job.
Will this be an issue? If so, is there a way to close the "terminal" running the first script from the crontab?
Here's a truncated form of my code:
Script 1 (to be launched by crontab):
for i in file1 file2 file3 etc
do
    # launch one conversion per file, each in the background
    bash /path/to/convert.sh "$i" &
done
exit 0
Script 2 (convert.sh):
fileName=${1?no file given}
# derive the output name by swapping the extension
jpegName=$(echo "$fileName" | sed 's/tif/jpg/g')
convert "$fileName" "$jpegName"
exit 0
Thanks for any help/assurances you can give!

You don't need script 2; you can convert it to a function and put it inside script 1.
Another problem is that you are running convert.sh in an uncontrolled way: you cannot foresee how many background processes will be created, and this may lead to severe performance overhead.
Finally, if the process does not end normally, you can also terminate it from cron by issuing pkill script1.sh.
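A minimal sketch of both suggestions, assuming bash 4.3+ for wait -n; the max_jobs cap and the convert_one name are illustrative, not from the original scripts:
#!/bin/bash
# Sketch: convert.sh folded into script 1 as a function, with a cap
# on how many conversions run at once.
max_jobs=4
convert_one() {
    local fileName=$1
    local jpegName=${fileName//tif/jpg}   # same substitution the sed performed
    convert "$fileName" "$jpegName"
}
for i in file1 file2 file3 etc
do
    # throttle: wait for a free slot before starting another conversion
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n   # bash 4.3+: block until any one background job finishes
    done
    convert_one "$i" &
done
wait   # let cron see the script exit only after every conversion is done
exit 0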

Related

Bash script is waiting to open second file in gedit until I close the first one [duplicate]

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
I.e., if you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes: if you do nothing else, commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel, and then wait for them all to finish, by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way with a bare wait, you won't get the exit status of the individual child commands (and set -e won't catch their failures), so you won't be able to tell whether they succeeded or failed in the usual way.
The bash manual has more information (search for wait, about two-thirds of the way down).
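If you do need the statuses, one workaround (a sketch, reusing the command1..command3 placeholders above) is to record each PID and wait on it individually, since wait PID returns that child's exit status:
pids=()
command1 & pids+=($!)
command2 & pids+=($!)
command3 & pids+=($!)
status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=$?   # waiting on one PID yields that child's status
done
exit "$status"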
Add '&' at the end of a command to run it in parallel.
However, it is strange, because in your case the second command depends on the final result of the first one. Either use sequential commands, or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
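A small sketch of what that looks like from the calling script's side (self_daemonize is a made-up stand-in, not a real program):
self_daemonize() {    # hypothetical stand-in for a daemonizing program
    ( sleep 30 & )    # the subshell exits at once; sleep is reparented and lives on
}
self_daemonize        # returns immediately, so bash has nothing left to wait for
echo "prompt is back while the work continues"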
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes, so they can be executed simultaneously. One caveat: on Unix you won't normally get an access error when one process writes to a file another is reading, but the reader may see partial or inconsistent data.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
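For instance, a quick sketch (the /tmp files are placeholders): the parentheses create a child process, so the grouped commands run independently of the parent when backgrounded.
( cd /tmp && cp a b ) &   # the cd affects only the subshell, not the parent
wait                      # block until the subshell finishes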

How can I tell if a script was run in the background and with nohup?

I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check if the user ran the script using nohup and '&', e.g.
me#myHost:/home/me/bin $ nohup doAlotOfStuff.sh &. I want to make 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any further and complain to the user that they ran the script wrong? Better yet, is there a way I can force the script to run with nohup and &?
Edit: the server environment is AIX 7.1
The ps utility can get the process state. The process state code will contain the character + when running in the foreground; absence of + means the script is running in the background.
However, it is hard to tell whether a background script was invoked using nohup. It is also almost impossible to rely on the presence of nohup.out, as the user can redirect output elsewhere at will.
There are two ways to accomplish what you want: either bail out and warn the user, or automatically restart the script in the background.
#!/bin/bash
mypid=$$
# The quoted "+" is matched literally; stat contains "+" only for a
# foreground process.
if [[ $(ps -o stat= -p "$mypid") =~ "+" ]]; then
    echo "Running in foreground; restarting in background." >&2
    exec nohup "$0" "$@" &   # the & forks a subshell, which the exec replaces
    exit
fi
# the rest of the script
...
In this code, if the process state contains +, the script prints a warning and then restarts itself in the background under nohup. If it was started in the background, it just proceeds to the rest of the code.
If you prefer to bail out and just warn the user, remove the exec line. Note that the exit is still needed even with the exec: because of the trailing &, the exec happens in a forked subshell, so the parent script continues past that line.
One good way to find out whether a script is logging to nohup.out is to echo a marker and then verify that it can be read back from the file. For example:
echo "complextag"
if ! grep -q "complextag" nohup.out 2>/dev/null; then
    # various commands complaining to the user, then exiting
    exit 1
fi
This works because if the script's stdout is going to nohup.out (or whatever output file you specified), the echoed phrase will be appended to that file. If it doesn't appear there, the script was not run using nohup and you can scold the user, perhaps by using a wall command on a temporary broadcast file (I can elaborate on that if you want).
As for being run in the background: if it's not, you should know from the nohup check above.

How to make bash interpreter stop until a command is finished?

I have a bash script with a loop that calls a heavy calculation routine on every iteration. I use the results from each calculation as input to the next, so I need to make bash stop reading the script until every calculation is finished.
for i in $(cat calculation-list.txt)
do
./calculation
(other commands)
done
I know about the sleep program, and I used to use it, but now the time of the calculations varies greatly.
Thanks for any help you can give.
P.S. ./calculation is another program, and it opens a subprocess. The script then passes instantly to the next step, and I get an error in that calculation because the last one has not finished yet.
If your calculation daemon will work with a precreated empty logfile, then the inotify-tools package might serve:
touch "$logfile"
inotifywait -qqe close "$logfile" & ipid=$!
./calculation              # returns immediately; the daemon keeps running
wait $ipid                 # blocks until the daemon closes the logfile
if it closes the file just once. If it's doing an open/write/close loop, perhaps you can mod the daemon process to wrap some other filesystem event around the execution?
#!/bin/sh
# Uglier, but handles logfile being closed multiple times before exit:
# Have the ./calculation start this shell script, perhaps by substituting
# this for the program it's starting
trap 'echo >closed-on-calculation-exit' 0 1 2 3 15
./real-calculation-daemon-program
Well, guys, I've solved my problem with a different approach. When the calculation is finished, a logfile is created. I then wrote a simple until loop with a sleep command. Although this is very ugly, it works for me and it's enough.
for i in $(cat calculation-list.txt)
do
    rm -f "$logfile"       # clear any stale logfile from a previous pass
    (calculations routine)
    until [[ -f "$logfile" ]]; do
        sleep 60           # poll until the calculation writes its logfile
    done
    (other commands)
done
Easy. Get the process ID (PID) via some awk magic and then use wait to wait for that PID to end. Here are the details on wait from the Advanced Bash-Scripting Guide:
Suspend script execution until all jobs running in background have terminated, or until the job number or process ID specified as an option terminates. Returns the exit status of waited-for command.
You may use the wait command to prevent a script from exiting before a background job finishes executing (this would create a dreaded orphan process).
And using it within your code should work like this:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 &
    CALCULATION_PID=$(jobs -l | awk '{print $2}')   # PID of the background job
    wait ${CALCULATION_PID}
    (other commands)
done

How to Parse Values from output in BASH

I'm writing a script that should create a rotating series of debug logs as it runs over a period of time. My current problem is that when I run it with -vx attached, I can see that it stops during the actual debugging process and doesn't proceed through the loop. This reflects how the command runs normally, so I thought that, to continue the process, I would run it with &.
The problem is that this will become exponentially messier over time (since none of the processes are stopping). So what I'm looking for is a way to capture the PID output of the & command in a variable; I will then add a kill command at the start of the loop pointed at that variable.
Figuring out how to parse the output of commands will also be useful in the other part of my project, which is to terminate the while loop based on a particular % free in a df -h for a selected partition.
No parsing needed. The PID of the most recent background process is stored in $!.
command & # run command in background
pid=$! # save pid as $pid
...
kill $pid # kill command
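For the df part of the question, a hedged sketch: the /data mount point and the 90% threshold are made up, and it assumes POSIX df -P output, where field 5 on the second line is the use percentage.
threshold=90          # assumed cutoff, percent used
while true; do
    used=$(df -P /data | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    (( used >= threshold )) && break   # leave the loop when too full
    # ... rotate debug logs here ...
    sleep 60
done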

Return code of shell start script which launches task on background

I'm seeking advice regarding the best practice for starting (Java) programs from shell scripts.
Currently the practice within our firm seems to be a shell script that sets all the environment variables and launches the Java process (the language is not important in this case) in the background, similar to:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1 &
which is the last line of the script. We are launching a single process.
One doubt I have is that the return code of such a shell process is always 0, even when the program fails to start due to some Exception/Error. This makes it hard for monitoring tools: they can't rely on the shell exit code, for example.
I found out this can be fixed by waiting for the process to end like:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1 &
wait $!
But my understanding is that this makes the last & completely useless, since running:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1
will have the same effect.
So my question is: what is the best practice for launching programs from shell? Does running in the background have some benefit I'm overlooking?
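A hedged sketch of the wait pattern under discussion; the one thing & still buys you is the chance to do other startup work between the launch and the wait:
nohup $JAVA_CMD > "$LOG_DIR/$LOG_FILE" 2>&1 &   # $JAVA_CMD left unquoted so a multi-word command splits, as in the question
java_pid=$!
# ... other startup work could run here while the program comes up ...
wait "$java_pid"
exit $?    # monitoring tools now see the launched program's exit status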
You should look into at and batch, and possibly cron. These are all tools to run commands, scripts, or job streams non-interactively. at runs a job and then emails stderr output back to the user (the default behavior):
at -k now <<!
$JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1
!
The batch command will let you write a series of commands to a file, then execute the file as if it were stdin; you can also do this interactively.
cron jobs (crontab) run at specified times and dates, like every Monday at 02:00. This does not seem to fit your question.
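For completeness, a crontab line for that example schedule (the script path is hypothetical):
# minute hour day-of-month month day-of-week  command
0 2 * * 1 /path/to/script.sh    # every Monday at 02:00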
Try this:
http://www.thegeekstuff.com/2010/06/at-atq-atrm-batch-command-examples/
