Execute simultaneous commands and quit when one finishes - bash

I hope this question hasn't been asked too many times, but I couldn't find an answer on Google (I didn't know how to phrase the search).
Does anyone know how to run two commands in parallel in bash such that, when one finishes, the other is terminated as well?
For instance, I have two different Python scripts:
loop.py: while 1: pass
print.py: print(42)
I would like to do something like python3 loop.py ** python3 print.py. The two scripts must run in parallel, and when print.py finishes, loop.py should end automatically.
My usage of that command would be to make something like:
tcpdump -i any -w out.trace ** python3 network_script.py
Thank you in advance

What you want is
tcpdump ... & pid=$!
python3 network_script.py
kill $pid
Run the first script in the background, then start the second script. When the second script ends, kill the first one.
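Applied to the example scripts from the question, a minimal sketch (assuming the goal is to keep loop.py running only while print.py is running) would be:
python3 loop.py &    # infinite loop, started in the background
loop_pid=$!          # remember its process id
python3 print.py     # runs in the foreground and finishes on its own
kill "$loop_pid"     # stop the loop once print.py is done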

Not the cleanest solution, but you can start processes in the background with a trailing & and then wait for them to complete.
Each of those processes would have to kill all others upon completion.
It could look like this:
(python loop.py && killall Python) &
(python print.py && killall Python) &
wait
echo "Done with at least one!"

Parallellize bash scripts and conditional interruption

I've seen many questions about parallelizing bash scripts, but so far I haven't found one that answers my question.
I have a bash script that runs two Python scripts sequentially (the fact that they are Python scripts is not important; they could be any other bash jobs):
python script_1.py
python script_2.py
Now, assume that script_1.py takes a certain (unknown) time to finish, while script_2.py has an infinite loop in it.
I'd like to run the two scripts in parallel, and when script_1.py finishes the execution I'd like to kill script_2.py as well.
Note that I'm not interested in doing this within the python scripts, but I'm interested to do this from a bash point of view.
What I thought was to create 2 "sub" bash scripts: bash_1.sh and bash_2.sh, and to run them in parallel from a main_bash.sh script that looks like:
bash_1.sh & bash_2.sh
where each bash_i.sh job runs a script_i.py script.
However, this wouldn't terminate the second infinite loop once the first one is done.
Is there a way of doing this, adding some sort of condition that kills one script when the other one is done?
As an additional (less important) point, I'd be interested in monitoring the terminal output
of the first script, but not of the second one.
If your scripts need to start in that sequence, you could wait for bash_1 to finish:
bash_1 &
b1=$!
bash_2 &
b2=$!
wait $b1
kill $b2
It's simpler than you think. When bash_1.sh finishes, just kill bash_2.sh. The trick is getting the process id that kill will need to do this.
bash_2.sh &
b2_pid=$!
bash_1.sh
kill $b2_pid
You can also use job control, if enabled.
bash_2.sh &
bash_1.sh
kill %%
Note that you don't need the bash wrapper scripts for this; you can run your Python scripts directly in the same fashion:
python script_2.py &
python script_1.py
kill %%
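To address the question's secondary point about watching only the first script's output, one sketch (an addition beyond the original answers) simply discards the background job's output:
python script_2.py >/dev/null 2>&1 &   # infinite loop, output silenced
python script_1.py                     # its output stays on the terminal
kill %%                                # stop script_2.py once script_1.py finishes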

Automatically terminate all nodes after calling roslaunch

I am trying to run several roslaunch files, one after the other, from a bash script. However, when the nodes complete execution, they hang with the message:
[grem_node-1] process has finished cleanly
log file: /home/user/.ros/log/956b5e54-75f5-11e9-94f8-a08cfdc04927/grem_node-1*.log
Then I need to press Ctrl-C to get "killing on exit" for all of the nodes launched from the launch file. Is there some way to make the nodes kill themselves automatically on exit? At the moment I have to press Ctrl-C every time a node terminates.
My bash script looks like this, by the way:
python /home/user/git/segmentation_plots/scripts/generate_grem_launch.py /home/user/Data2/Coco 0 /home/user/git/Async_CNN/config.txt
source ~/setupgremsim.sh
roslaunch grem_ros grem.launch config:=/home/user/git/Async_CNN/config.txt
source /home/user/catkin_ws/devel/setup.bash
roslaunch rpg_async_cnn_generator conf_coco.launch
The script setupgremsim.sh sources another catkin workspace.
Many thanks!
Thanks all for your advice. What I ended up doing was this: I launched my ROS nodes from separate Python scripts, which I then called from the bash script. In Python you can terminate the launched processes by calling shutdown() on the launch parent. So, to provide an example for anyone else with this issue:
bash script:
#!/bin/bash
for i in {0..100}
do
    echo -e "========================================================\n"
    echo -e "This is the $i th run\n"
    echo -e "========================================================\n"
    source /home/timo/catkin_ws/devel/setup.bash
    python planar_launch_generator.py
done
and then inside planar_launch_generator.py:
import roslaunch
import rospy

# Flag flipped by the process listener when the launched node dies.
process_generate_running = True

class ProcessListener(roslaunch.pmon.ProcessListener):
    global process_generate_running

    def process_died(self, name, exit_code):
        global process_generate_running
        process_generate_running = False
        rospy.logwarn("%s died with code %s", name, exit_code)

def init_launch(launchfile, process_listener):
    # Build a ROSLaunchParent for the given launch file and attach the listener.
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid,
        [launchfile],
        process_listeners=[process_listener],
    )
    return launch

rospy.init_node("async_cnn_generator")
launch_file = "/home/user/catkin_ws/src/async_cnn_generator/launch/conf_coco.launch"
launch = init_launch(launch_file, ProcessListener())
launch.start()

# Spin until the launched process dies, then shut the launch down cleanly.
while process_generate_running:
    rospy.sleep(0.05)

launch.shutdown()
Using this method you could source any number of different catkin workspaces and launch any number of launchfiles.
Try this:
(1) Put each launch in a separate shell script, so you have N scripts. In each script, call the launch file in xterm: xterm -e "roslaunch yourfancylauncher".
(2) Prepare a master script which calls all N child scripts in the sequence you want and with the delays you want.
Once a launch is done, its xterm should kill itself.
Edit: you can also manually kill one if you know it is going to hang, e.g. as below.
#!/bin/sh
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash
# Start roscore via systemd or rc.local (or in lxterminal or another terminal) to avoid killing it by accident.
# Then run the part you think is going to hang or cause a problem; echo before the action if necessary.
xterm -geometry 80x36+0+0 -e "echo 'uav' | sudo -S dnsmasq -C /dev/null -kd -F 10.5.5.50,10.5.5.100 -i enp59s0 --bind-dynamic" & sleep 15
# The Ouster lidar cannot auto-configure like the Velodyne and will hang here, so nothing after it would run.
killall xterm & sleep 1
# Just kill it and keep running the other launches.
xterm -e "roslaunch '/home/uav/catkin_ws/src/ouster_driver_1.12.0/ouster_ros/os1.launch' os1_hostname:=os1-991907000715.local os1_udp_dest:=10.5.5.1"

How to run a command after a certain time while another command is running?

I have a bash script which runs a main command, let's say for one hour. I would like to execute another command a certain time after the main command has started (at time t_x). Something like this:
main starts ----------------------------> main ends
                 |
                 |
                 at time t_x, the second command is executed
At the moment I have something like this:
mpirun main_command & sleep 1m && second_command
and the problem is that after the second command is executed, the main command is killed. Can anyone help me? Thanks!
The first command may be failing to lock the console because another process is also using it. You will need to redirect its standard I/O streams:
0<&- mpirun main_command >/dev/null 2>/dev/null
If this still does not work, run the main command through a separate shell:
sh -c 'mpirun main_command' & sleep 1m; second_command
You can use ; instead of &&, unless you need a failing exit code when someone interrupts the sleep.
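Another pattern, building on the $!/wait approach used elsewhere on this page (a sketch, not part of the original answer), is to background the main command, record its PID, run the second command after the delay, and then wait so the script does not exit, and take the main command with it, before it is done:
mpirun main_command &   # long-running main job in the background
main_pid=$!
sleep 1m                # wait until t_x
second_command          # main_command keeps running meanwhile
wait "$main_pid"        # block until the main command finishes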

Is it possible for bash commands to continue before the result of the previous command?

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
ie: If you run the following two commands from a bash script is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes, if you do nothing else, then commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel, and then wait for them all to finish, by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, you'll be unable to get the exit status of the child commands (and set -e won't work), so you won't be able to tell whether they succeeded or failed in the usual way.
The bash manual has more information (search for wait, about two-thirds of the way down).
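If you do need the exit statuses, one option (a sketch, not part of the original answer) is to record each PID and wait for it individually, since wait PID returns that child's exit status:
command1 & pid1=$!
command2 & pid2=$!
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "command1 exited with $status1, command2 exited with $status2"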
add '&' at the end of a command to run it parallel.
However, it is strange because in your case the second command depends on the final result of the first one. Either use sequential commands or copy to b and c from a like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
In general, unless explicitly sent to the background or forked off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes, so they can be executed simultaneously. It's a wild guess, really, but I think you'd get an access error if one process tried to write to a file that is being read by another process.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
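For illustration (this snippet is mine, not from the original answer, and script_a.py/script_b.py are placeholder names), a parenthesized group runs in its own subshell process, so it can be backgrounded and managed as a single unit:
( python script_a.py; python script_b.py ) &   # both run in one background subshell
subshell_pid=$!
wait "$subshell_pid"                           # wait for the whole group to finish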
