I have the following ksh script:
sqlplus usr1/pw1@DB1 @$DIR/a.sql $1 &
sqlplus usr2/pw2@DB2 @$DIR/b.sql $1 &
wait
echo "Done!"
Where $DIR is a variable with the absolute path where a.sql and b.sql are.
For some time, I've been running this script daily and it works fine.
The intention is that both SQL*Plus sessions go to the background and
execute in parallel, and when they finish I can continue with the
following steps of the application.
Since it's not a test version anymore, I scheduled it in the crontab
to execute daily. The problem I have now is that it won't pause at
"wait" and let the sqlplus sessions finish; it immediately outputs
"Done!". In the real app, that "echo Done!" is actually a call to
another program that does some processing on a.sql's and b.sql's output.
But since it doesn't wait for both SQL scripts to actually finish,
that processing cannot be done.
It works absolutely fine when I run it myself, whether I do it from
the local directory or from the root directory (as crontab would do it). But
when it is executed automatically by crontab, it doesn't stop at
wait and screws the whole thing up.
Any ideas on what might be happening? Thanks!
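For what it's worth, here is a minimal sketch of the same pattern with each child's PID captured via $! and waited on individually, and with each session's output sent to a log file so a cron run leaves some evidence behind; the credentials, DB names, and $DIR value are placeholders, not the real ones:
#!/bin/ksh
DIR=/path/to/sql   # placeholder

sqlplus usr1/pw1@DB1 @$DIR/a.sql "$1" > "$DIR/a.log" 2>&1 &
pid_a=$!
sqlplus usr2/pw2@DB2 @$DIR/b.sql "$1" > "$DIR/b.log" 2>&1 &
pid_b=$!

# Wait for each specific PID and record its exit status.
wait $pid_a; rc_a=$?
wait $pid_b; rc_b=$?
echo "a.sql exited with $rc_a, b.sql exited with $rc_b"
echo "Done!"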
Related
I need to have a bash script triggered and run, but part of the script requires Apache to restart, which obviously kills the script before it can continue. I can't move the restarts to the end of the script.
I have tried to run the bash script through a PHP script using shell_exec() in a GNU screen session to keep it going, but that doesn't work: as soon as Apache goes down, the script stops.
There has to be a way to do this, but I'm not seeing it.
How can I accomplish this?
Does nohup do the job?
nohup is a POSIX command which means "no hang up". Its purpose is to execute a command such that it ignores the HUP (hangup) signal and therefore does not stop when the user logs out.
Output that would normally go to the terminal goes to a file called nohup.out, if it has not already been redirected.
https://en.wikipedia.org/wiki/Nohup
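A minimal sketch of how the script could be launched that way (the script path and log file are placeholders):
nohup /path/to/myscript.sh > /var/tmp/myscript.log 2>&1 &
Because the output is redirected explicitly, nothing goes to nohup.out, and the script keeps running when its parent goes away, since the HUP signal is ignored.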
I am testing a bash script I hope to run as a cron job to scan a download log and perform labor-intensive conversions on image files. In order to run several conversions at once, the first script loops through the download log and sends the filename to the second script, which I set to run as a background process using &.
The script pair works well, but when the process is complete, I must press the enter key to return to a command prompt. This is a non-issue when I am running a test, but I am not sure if this behavior has ramifications when run as a cron job.
Will this be an issue? If so, is there a way to close the "terminal" running the first script from the crontab?
Here's a truncated form of my code:
Script 1 (to be launched by crontab):
for i in file1 file2 file3 etc
do
    bash /path/to/convert.sh "$i" &   # run each conversion in the background
done
exit 0
Script 2 (convert.sh)
fileName=${1?no file given}
jpegName=$(echo "$fileName" | sed 's/tif/jpg/g')   # derive the .jpg name from the .tif name
convert "$fileName" "$jpegName"
exit 0
Thanks for any help/assurances you can give!
You don't need script 2; you can convert it to a function and put it inside script 1, as shown in the sketch below.
Another problem is that you are running convert.sh in an uncontrolled way: you cannot foresee how many background processes will be created, and this may lead to severe performance overhead.
Finally, if you cannot end the process in a normal way, you may choose to terminate it from cron as well by issuing pkill script1.sh.
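A minimal sketch of that suggestion, with the conversion folded into script 1 as a function and the number of simultaneous background jobs capped (the cap of 4 is an arbitrary example):
#!/bin/bash
# Formerly convert.sh, now a function inside script 1.
convert_one() {
    local fileName=$1
    local jpegName=${fileName%.tif}.jpg   # derive the .jpg name from the .tif name
    convert "$fileName" "$jpegName"
}

maxJobs=4      # arbitrary cap on parallel conversions
running=0
for i in file1 file2 file3 etc
do
    convert_one "$i" &
    running=$((running + 1))
    if [ "$running" -ge "$maxJobs" ]; then
        wait          # let the current batch finish before starting more
        running=0
    fi
done
wait               # make sure the last batch has finished
exit 0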
I'm trying to schedule a script to run on windows. The triggering part works fine. The important part of my script looks like:
start C:\staging-script -arg1 arg -arg2 arg & ECHO "Did staging"
start C:\prod-script -arg1 arg -arg2 arg & ECHO "Did prod"
When I run it from cmd.exe, two more cmd windows are opened, both execute the script, and then the windows don't close. When I try to use the Windows Task Scheduler for this, it fails because the "resource is still in use".
Additionally, the ECHOs happen in the original window (which is where they should happen) but happen right away, not when the start task completes.
start creates an independent process. Once the process is started, the message is produced and the next line is executed.
If you want the two started processes to execute in parallel and you're only bothered by their windows not closing, insert
exit
at the end of the started scripts.
If you want to execute the processes serially, that is complete process1 before producing the message and starting process2, then CALL the batches, don't start them.
Try adding exit to the end of each script the windows execute.
I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check whether the user ran the script using nohup and '&', e.g.
me@myHost:/home/me/bin $ nohup doAlotOfStuff.sh &
I want to be 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any further and complain to the user that they ran the script wrong? Better yet, is there a way to force the script to run with nohup and &?
Edit: the server environment is AIX 7.1
The ps utility can get the process state. The process state code will contain the character + when the process is running in the foreground; absence of + means it is running in the background.
However, it will be hard to tell whether the background script was invoked using nohup. It's also almost impossible to rely on the presence of nohup.out, as the output can be redirected elsewhere by the user at will.
There are two ways to accomplish what you want: either bail out and warn the user, or automatically restart the script in the background.
#!/bin/bash
mypid=$$
# A state code containing "+" means this process is running in the foreground.
if [[ $(ps -o stat= -p $mypid) =~ "+" ]]; then
    echo "Running in foreground."
    exec nohup "$0" "$@" &
    exit
fi
# the rest of the script
...
In this code, if the process has a state code of +, it prints a warning and then restarts itself in the background. If the process was started in the background, it simply proceeds to the rest of the code.
If you prefer to bail out and just warn the user, you can remove the exec line and keep the exit. Note that the exit is still needed even with the exec line present: because of the trailing &, the exec happens in a subshell, so the foreground copy of the script continues past it.
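A minimal sketch of the bail-out variant, using the same ps check (the wording of the message is just an example):
#!/bin/bash
# Refuse to run unless started in the background.
if [[ $(ps -o stat= -p $$) =~ "+" ]]; then
    echo "Please start this script as: nohup $0 <args> &" >&2
    exit 1
fi
# the rest of the script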
One good way to find out whether a script is logging to nohup.out is to first check that nohup.out exists, then echo to it and make sure you can read the echoed text back. For example:
echo "complextag"
if ( $(cat nohup.out | grep "complextag" ) != "complextag" );then
# various commands complaining to the user, then exiting
fi
This works because if the script's stdout is going to nohup.out, where it should be going (or whatever output file you specified), then the phrase you echo should be appended to that file. If it doesn't appear there, then the script was not run using nohup and you can scold the user, perhaps by using a wall command on a temporary broadcast file (if you want me to elaborate on that, I can).
As for being run in the background, if it's not, you should know from the nohup check.
I want to know how to make a shell script wait until another script finishes its execution, without the help of the sleep command.
Suppose I have two scripts, run.sh and kill.sh, where run.sh brings all the processes up (i.e. starts running the image on the box), whereas kill.sh contains just the kill commands to kill all the running processes.
Whenever I run run.sh, it brings all the processes up and then ends. What happens then is that all the running processes become orphans (handled by init). Whenever we run kill.sh, some of the processes become zombies.
In other words, orphan processes are becoming zombies.
To avoid this, I want to make run.sh wait until the kill.sh script has finished.
So, how do I make a shell script wait for another script? Please provide comments.
Thanks in advance
You can use wait to let the first script finish without an explicit sleep. For example:
#!/bin/bash
./first_script.sh &   # start the first script in the background
wait                  # block until all background jobs have finished
./second_script.sh
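The same idea extends to waiting on one specific background job: capture its PID with $! and pass it to wait (the extra job here is a hypothetical example):
#!/bin/bash
./first_script.sh &
first_pid=$!            # PID of the job we care about
./some_other_job.sh &   # hypothetical unrelated background job
wait $first_pid         # block only until first_script.sh has finished
./second_script.sh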