Shell script running infinitely

I have a shell script, run.sh:
cd elasticsearch-1.1.0/
./bin/elasticsearch
cd
cd RBlogs/DataFetcher/
mvn clean install assembly:single;
cd target/
java -jar DataFetcher-0.0.1-SNAPSHOT-jar-with-dependencies.jar
Here, if the second line (./bin/elasticsearch) executes, it runs indefinitely, so the next lines never execute. What I need is for the next lines to run after 10 seconds. But
cd elasticsearch-1.1.0/
./bin/elasticsearch
sleep 10
cd
cd RBlogs/DataFetcher/
mvn clean install assembly:single;
cd target/
java -jar DataFetcher-0.0.1-SNAPSHOT-jar-with-dependencies.jar
This also will not execute the next lines, because ./bin/elasticsearch will not complete its execution within 10 seconds. So how can I solve this problem? Please help.

Adding & at the end of ./bin/elasticsearch will cause the process to run in a subshell, freeing the current shell up for the next commands.
./bin/elasticsearch &
Change this in your second version of the script and things should run like you want them to.
More information can be found in man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell.
The shell does not wait for the command to finish, and the return status is 0.
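For reference, a minimal sketch of the corrected run.sh, using the same paths as the question (the sleep 10 simply gives Elasticsearch some time to start before the fetcher runs):
cd elasticsearch-1.1.0/
./bin/elasticsearch &    # backgrounded: the script moves on immediately
sleep 10                 # give Elasticsearch time to come up
cd
cd RBlogs/DataFetcher/
mvn clean install assembly:single
cd target/
java -jar DataFetcher-0.0.1-SNAPSHOT-jar-with-dependencies.jar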

You may try to put it in the background; & can do this for you:
./bin/elasticsearch &

If you simply want elasticsearch to run in the background while the rest of the script progresses, just use &:
cd elasticsearch-1.1.0/
./bin/elasticsearch &
sleep 10
cd
cd RBlogs/DataFetcher/
However, if you want to run elasticsearch for at most 10 seconds, killing it if necessary, then proceeding with the rest of the script, you need something a little more complicated:
cd elasticsearch-1.1.0/
./bin/elasticsearch &
pid=$!                    # remember the PID of the background process
sleep 10
kill -0 "$pid" 2>/dev/null && kill "$pid"   # if it is still running, kill it
cd
cd RBlogs/DataFetcher/
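As an alternative to the manual kill above (not what this answer used): on systems with GNU coreutils, the timeout command runs a program for at most the given duration, killing it if necessary:
cd elasticsearch-1.1.0/
timeout 10 ./bin/elasticsearch   # SIGTERM is sent after 10 seconds
cd
cd RBlogs/DataFetcher/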

As other answers mentioned, you can:
- use ./bin/elasticsearch & to run the command in the background;
- record the process id of the command run in the background with child_pid=$!, then stop the process with kill $child_pid after some time, to implement a timeout mechanism.
Meanwhile, you can also synchronize another operation with the command running in the background by using the wait command. An example below:
./bin/elasticsearch &
# do something asynchronously here
wait # wait for accomplishment of ./bin/elasticsearch
# do something synchronously here
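wait also accepts a PID and then returns that job's exit status, which is useful when more than one job is in the background; a small sketch reusing the child_pid name from above:
./bin/elasticsearch &
child_pid=$!
# do other work here
wait $child_pid                       # block until that specific job finishes
echo "elasticsearch exited with $?"   # exit status of the waited-for job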

Related

Windows bash script to run something from WSL

I am trying to write a Windows batch script to:
1. start Windows Subsystem for Linux (WSL) and cd within WSL to "../myfolder/"
2. run ./foo first_parameter second_parameter
3. wait until finished and exit WSL
4. cd within Windows to "../myWinFolder/"
5. run Foo.exe parameter
6. wait until finished
This is my attempt:
bash -c "cd ../myFolder/ && ./foo first_parameter second_parameter"
cd ..
cd myWinFolder
START /WAIT Foo.exe parameter
But sadly CMD does not wait for WSL to finish before running the EXE.
Anything I can do?
I'm satisfied that the interop between CMD and WSL does in fact wait for commands to finish. Try the following:
In a file called runit.bat
echo bat01
bash -c "echo bat02; cd ./bash/; ./runit.sh; echo bat03"
echo bat04
In a sub-folder called ./bash/ paste the following in a file called runit.sh
echo sh01
sleep 2s
echo sh02
When you run runit.bat from within CMD you will see a wait of 2 seconds.
You have not specified what is inside your ./foo script. I suspect that it is running a task in the background, or running something that returns immediately. This can be simulated by putting & after the sleep so that it runs in the background within WSL: sleep 2s &. When you do this you see that there is no pause in the execution of the script.
So I would check ./foo, maybe add some echo debug statements inside it, and run it from within WSL first to make sure that it does indeed wait until all the commands are finished before it exits.
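If ./foo does turn out to background its work, one hypothetical fix is to make it wait for its own children before exiting; long_task here is a made-up stand-in for whatever foo actually runs:
#!/bin/bash
# hypothetical ./foo
long_task "$1" "$2" &   # work started in the background
wait                    # without this, foo returns immediately and bash -c exits early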

Automatically terminate all nodes after calling roslaunch

I am trying to run several roslaunch files, one after the other, from a bash script. However, when the nodes complete execution, they hang with the message:
[grem_node-1] process has finished cleanly
log file: /home/user/.ros/log/956b5e54-75f5-11e9-94f8-a08cfdc04927/grem_node-1*.log
Then I need to Ctrl-C to get killing on exit for all of the nodes launched from the launch file. Is there some way of causing nodes to automatically kill themselves on exit? Because at the moment I need to Ctrl-C every time a node terminates.
My bash script looks like this, by the way:
python /home/user/git/segmentation_plots/scripts/generate_grem_launch.py /home/user/Data2/Coco 0 /home/user/git/Async_CNN/config.txt
source ~/setupgremsim.sh
roslaunch grem_ros grem.launch config:=/home/user/git/Async_CNN/config.txt
source /home/user/catkin_ws/devel/setup.bash
roslaunch rpg_async_cnn_generator conf_coco.launch
The script setupgremsim.sh sources another catkin workspace.
Many thanks!
Thanks all for your advice. What I ended up doing was this: I launched my ROS nodes from separate Python scripts, which I then called from the bash script. In Python you are able to terminate child processes with shutdown(). So, to provide an example for anyone else with this issue:
bash script:
#!/bin/bash
for i in {0..100}
do
    echo "========================================================\n"
    echo "This is the $i th run\n"
    echo "========================================================\n"
    source /home/timo/catkin_ws/devel/setup.bash
    python planar_launch_generator.py
done
and then inside planar_launch_generator.py:
import roslaunch
import rospy

process_generate_running = True

class ProcessListener(roslaunch.pmon.ProcessListener):
    global process_generate_running

    def process_died(self, name, exit_code):
        global process_generate_running
        process_generate_running = False
        rospy.logwarn("%s died with code %s", name, exit_code)

def init_launch(launchfile, process_listener):
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid,
        [launchfile],
        process_listeners=[process_listener],
    )
    return launch

rospy.init_node("async_cnn_generator")
launch_file = "/home/user/catkin_ws/src/async_cnn_generator/launch/conf_coco.launch"
launch = init_launch(launch_file, ProcessListener())
launch.start()

while process_generate_running:
    rospy.sleep(0.05)

launch.shutdown()
Using this method you could source any number of different catkin workspaces and launch any number of launchfiles.
Try to do this:
(1) Put each launch in a separate shell script, so you have N scripts. In each script, call the launch file in xterm: xterm -e "roslaunch yourfancylauncher". Once the launch is done, xterm should kill itself.
(2) Prepare a master script which calls all N child scripts in the sequence you want and with the delays you want.
Edit: you can manually kill one if you know it's going to hang. E.g. below:
#!/bin/sh
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash
# Start roscore via systemd or rc.local, in lxterminal or another terminal, to avoid killing it by accident.
# Then run the part which you think is going to hang or cause a problem (echo -> action if necessary):
xterm -geometry 80x36+0+0 -e "echo 'uav' | sudo -S dnsmasq -C /dev/null -kd -F 10.5.5.50,10.5.5.100 -i enp59s0 --bind-dynamic" & sleep 15
# The Ouster lidar can't auto-configure like the Velodyne and will hang here, blocking the rest of the code.
# Just kill it and continue with the other launches.
killall xterm & sleep 1
xterm -e "roslaunch '/home/uav/catkin_ws/src/ouster_driver_1.12.0/ouster_ros/os1.launch' os1_hostname:=os1-991907000715.local os1_udp_dest:=10.5.5.1"

Start jobs in background from sh file in gitbash

I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in git-bash, which has the below commands:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands directly in git-bash, I get jobs running in the background, which can be seen using the jobs and kill commands. However, when I run them through abc.sh, the processes run in the background, but the git-bash instance disowns them, and I can no longer see them using jobs.
How can I run them through the abc.sh file and still be able to see them in the jobs list?
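A minimal sketch of one workaround, based on the fact that the jobs table belongs to the shell instance that started the jobs: sourcing the script runs its commands in the current git-bash shell, so the jobs stay visible there:
source abc.sh   # or: . abc.sh
jobs            # the background jobs started by the script now appear here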

Start a process in background, do a task, then kill the process in the background

I have a script that looks like this:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
popd
The selenium server needs to be running for the tests; after the phpunit command finishes, I'd like to kill the selenium server that was running.
How can I do this?
You can probably save the PID of the process in a variable, then use the kill command to kill it.
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
serverPID=$!
cd web/code/protected/tests/
phpunit functional/
kill $serverPID
popd
I haven't tested it myself; I would have written this as a comment, but I don't have enough reputation yet :)
When the script is executed, a new shell instance is created, which means that jobs in the new script would not list any jobs running in the parent shell.
Since the selenium server is the only background process created in the new script, it can be killed using:
# The first job
kill %1
Or:
# The last job; same as the first one here
kill %-
As long as you don't launch any other process in the background - which you don't - you can use $! directly:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
kill $!
popd

Shell script not waiting

ssh user@myserver.com <<EOF
cd ../../my/path/
sh runscript.sh
wait
cd ../../temp/path
sh secondscript.sh
EOF
The first script runs and asks me the questions in that script, but before I'm even able to start typing answers, the second script starts running. From what I'm reading, this shouldn't be happening, even without the wait.
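A likely explanation, offered as a sketch rather than a confirmed answer: the heredoc becomes the remote shell's stdin, so when runscript.sh prompts for input it reads the following heredoc lines as its answers instead of waiting for the terminal. Forcing a pseudo-terminal and passing the commands as an argument avoids this:
ssh -t user@myserver.com 'cd ../../my/path/ && sh runscript.sh && cd ../../temp/path && sh secondscript.sh'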
