I have a bunch of bash scripts that I run sequentially. I'm going to consolidate them into a single script, but there's one part that's a bit tricky. Specifically, script C launches a Google Compute Engine job, and I only want script D (the one immediately following it) to execute once that job is done.
Is there a good way of doing this?
In case it helps, my new script would be:
source script_A.sh
source script_B.sh
source script_C.sh
**wait until cloud job has finished**
source script_D.sh
Thanks!
After gcloud ... & is called, use gcloudpid=$! to grab its PID (I don't think you have to export it, but it wouldn't hurt). Then your main script will be:
source script_C.sh
wait $gcloudpid
source script_D.sh
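For reference, a minimal sketch of the relevant part of script_C.sh under this approach (the gcloud invocation stays elided, as in the question):
#!/bin/bash
# launch the Compute Engine job in the background
gcloud ... &
# remember the PID of the backgrounded gcloud command
gcloudpid=$!
Because script_C.sh is sourced rather than executed, gcloudpid stays visible in the main script, and wait works here only because the backgrounded gcloud is a child of that same shell.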
I have a UniVerse (Rocket U2) system and want to be able to call certain UniVerse/TCL commands from a shell script. However, whenever I run the uv binary, it seems to stop the execution of the rest of the shell script.
For example, if I run:
/u2/uv/bin/uv
It starts a UniVerse session. The next line of the script (RUNPY run_tests.py) is meant to be executed in the TCL environment, but it is never input to TCL. I have tried passing string parameters to the uv binary to be executed, but that doesn't appear to do anything.
Is there a way to call UniVerse/TCL commands from a UNIX/Shell environment?
You can type this manually or put it into a shell script. I have not run into any issues with this paradigm, but your choice of shell could theoretically affect it. You certainly want either to be in the directory of the account you want to execute it in, or to cd to it in the script.
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start
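For example, with the cd folded into the script (the account path below is a made-up placeholder):
#!/bin/sh
cd /u2/accounts/myaccount    # the account directory you want to run in
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start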
Good Luck.
One thing to watch out for: if you have a LOGIN paragraph or something else that runs automatically to start your application (which is really common), you need to find a way to bypass it for non-interactive users.
https://groups.google.com/forum/#!topic/comp.databases.pick/B2hzuXq3X9A mentions
IF OCONV(#TTY,'MCU')='PHANTOM' THEN ABORT
In UD, I kick off scripts from UNIX as a phantom, to a) capture the log output in _PH_ and b) end the process if extra input is requested rather than leave it hanging around. In UD that's:
$echo "PHANTOM COUNT VOC" | udt
UniData Release 8.1 Build: (2008)
Current UniData home is /unidata/ud81/.
Current working directory is /usr/ud81/demo
:PHANTOM COUNT VOC
PHANTOM process 18743448 started.
COMO file is '_PH_/dsiroot45172_18743448'.
:
Critical abort condition found.
$cat _PH_/dsiroot45172_18743448
COUNT VOC
14670 record(s) counted.
PHANTOM process 18743448 has completed.
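To automate that from a shell script, a rough sketch (hedged: the COMO banner format and the fixed sleep are assumptions based on the transcript above):
#!/bin/sh
# run this from the UniData account directory so _PH_/ resolves
out=$(echo "PHANTOM COUNT VOC" | udt)
# extract the COMO file name from the banner line: COMO file is '...'.
como=$(printf '%s\n' "$out" | sed -n "s/^COMO file is '\(.*\)'\..*$/\1/p")
sleep 5      # crude: give the phantom time to finish before reading its log
cat "$como"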
Van Amburg's answer is the most correct for handling multiple lines of input. The variant I used: instead of the << heredoc for multi-line input, I just put quotes around a single command (single and double quotes both work):
/u2/uv/bin/uv "RUNPY run_tests.py"
I am trying to write a wrapper script that calls other shell scripts in a sequential manner.
There are 3 shell scripts that pick .csv files of a particular pattern from a specified location and process them.
I need to run them sequentially by calling them from one wrapper script.
Let's consider 3 scripts, a.ksh, b.ksh, and c.ksh, that run sequentially in that order.
The requirement is that the wrapper should fail if a.ksh fails but continue if b.ksh fails.
Please suggest.
Thanks in advance!
Something like:
./a.ksh && { ./b.ksh; ./c.ksh; }
If a.ksh fails, the && stops everything after it; if b.ksh fails, c.ksh still runs, because the commands inside the braces are separated by ;.
I haven't tried this out. Do test with sample scripts that fail/pass before using.
See: http://www.gnu.org/software/bash/manual/bashref.html#Lists
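If you would rather spell the control flow out, a more explicit wrapper might look like this (a sketch, assuming all three scripts sit in the current directory):
#!/bin/ksh
./a.ksh || exit 1    # hard requirement: abort the wrapper if a.ksh fails
./b.ksh              # a failure here is tolerated; fall through to c.ksh
./c.ksh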
I would like to get insight on how to get started, or what general direction to look in, when trying to make a script or makefile that runs 3 make commands that all take the same input. These three commands all ask for the same input but output different Excel files, since each manipulates the pulled data in a different way. Therefore, if I were able to create a script or makefile that ran all three commands while giving the input only once, it would SAVE ME A TON OF TIME.
This is all being done in PuTTY, pretty much (in terms of the commands).
Thanks,
NP
You want to use a shell script.
For instance, you can create run.sh with:
#!/bin/bash
# "$@" forwards any extra command-line arguments to each make invocation
# (unlike $*, it preserves arguments that contain spaces)
make FLAG1=ON "$@"
make FLAG2=ON "$@"
make FLAG3=ON "$@"
Make it executable (chmod +x run.sh) and do ./run.sh MYCOMMONFLAG1=ON MYCOMMONFLAG2=OFF...
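If the three make targets instead read their answers interactively from stdin, another option (a sketch; the prompt text and target names are invented for illustration) is to capture the input once and replay it to each command:
#!/bin/bash
read -r -p "Enter the input: " value
printf '%s\n' "$value" | make report1
printf '%s\n' "$value" | make report2
printf '%s\n' "$value" | make report3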
I've been trying to create an infinite loop in OpenWRT, and I've succeeded:
#!/bin/sh /etc/rc.common
while true
do
# Code to run
sleep 15
done
This code works like a charm if I execute it as ./script. However, I want it to start on its own when I turn on my router. I've placed the script in /etc/init.d and enabled it with chmod +x script.
Regardless, the program doesn't start running at all. My guess is that I shouldn't execute this script itself at boot, but rather have another script that calls it. I haven't been able to work this out.
Any help would be appreciated.
As I have messed with OpenWRT init scripts in previous projects, I would like to contribute to Rich Alloway's answer (for the ones likely to drop here from a Google search). His answer only covers "traditional SysV style init scripts", as mentioned in the page he linked, Init Scripts.
There is a new process management daemon, procd, that you might find in your OpenWRT version. Sadly, its documentation has not been completed yet; see Procd Init Scripts.
There are minor differences, as they point out in their documentation:
- procd expects services to run in the foreground,
- shebang line: #!/bin/sh /etc/rc.common,
- explicitly use procd with USE_PROCD=1,
- start_service() instead of start().
A simple init script for procd would look like:
#!/bin/sh /etc/rc.common
# START is the run order of your script; keep it high so it does not
# interfere with the other init scripts
START=100
USE_PROCD=1

start_service() {
    procd_open_instance
    # placeholder command; point this at whatever you want procd to supervise
    procd_set_param command /path/to/your/command -with -some -args
    procd_close_instance
}
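Once the file is in /etc/init.d/ (call it myscript here), you enable and start it the usual way:
/etc/init.d/myscript enable
/etc/init.d/myscript start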
I posted a blog post about it a while ago that might help.
You need to have a file in /etc/rc.d/ with an Sxx prefix in order for the system to execute the script at boot time. This is usually accomplished by having the script in /etc/init.d and a symlink in /etc/rc.d pointing to the script.
The S indicates that the script should run at startup while the xx dictates the order that the script will run. Scripts are executed in naturally increasing order: S10boot runs before S40network and S50cron runs before S50dropbear.
Keep in mind that the system may not continue to boot with the script that you have shown here!
/etc/init.d/rcS calls each script sequentially and waits for the current one to exit before calling the next script. Since your script is an infinite loop, it will never exit and rcS may not complete the boot process.
Including /etc/rc.common is more useful if you use functions in your script like start(), stop(), restart(), etc., and add START and STOP variables, which describe when the script should be executed during boot/shutdown.
Your script can then be used to enable and disable itself at boot time by creating or removing the symlink: /etc/init.d/myscript enable
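Putting those pieces together, a traditional SysV-style wrapper for the infinite loop might look like this (a sketch; the script path and the START/STOP values are assumptions):
#!/bin/sh /etc/rc.common
START=99
STOP=10

start() {
    # background the loop so rcS can continue booting
    /root/loop.sh &
}

stop() {
    killall loop.sh
}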
See also OpenWRT Boot Process and Init Scripts
-Rich Alloway (RogueWave)
This is my situation:
I want to run Python scripts sequentially, starting with scriptA.py. When scriptA.py finishes, scriptB.py should run, followed by scriptC.py. After these scripts have run in order, I need to run an rsync command.
I plan to create a bash script like this:
#!/bin/sh
python scriptA.py
python scriptB.py
python scriptC.py
rsync blablabla
Is this the best solution for performance and stability?
To run a command only after the previous command has completed successfully, you can use a logical AND:
python scriptA.py && python scriptB.py && python scriptC.py && rsync blablabla
Because the whole statement succeeds only if every command in it succeeds, bash "short-circuits": it starts the next command only when the preceding one has completed successfully, and if one fails, it stops and doesn't run the rest.
Is that the behavior you're looking for?
If you have some experience with Python, it will almost certainly be better to write a Python script that imports and executes the relevant functions from the other scripts. That way you will be able to use Python's exception handling. Also, you can run rsync from within Python.
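A minimal sketch of that approach (hypothetical: it assumes each script exposes a main() function, and the rsync arguments are placeholders):
#!/usr/bin/env python
import subprocess
import sys

import scriptA
import scriptB
import scriptC

try:
    scriptA.main()
    scriptB.main()
    scriptC.main()
except Exception as exc:
    # stop the pipeline on the first failure, like && would
    sys.exit("pipeline failed: %s" % exc)

# placeholder rsync invocation; substitute your real arguments
subprocess.check_call(["rsync", "-a", "src/", "dest/"])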