How to stop a command after a certain time in bash

In bash, I'm running the iftop tool with text output. It's a tool that monitors the network and periodically writes the results to standard output (by default the terminal, but it could be something else, such as a file). It runs in the foreground until I stop it with Ctrl+C.
I would like to make it run for a certain period of time (1 minute, for example), and then stop the process automatically.
How can I do this in bash?
Here is what I already tried:
sudo iftop -nNPt -L 100 -i wlp0s20f3 > iftop.out & pid=$!; sleep 20; sudo kill $pid
But 1) it does not kill the process, and 2) it does not redirect the output to the file.

Using the timeout command in Linux should work; the syntax is below:
timeout [OPTION] DURATION COMMAND [ARG]...
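For example, applied to the command from the question, the following would run iftop for one minute and then stop it (timeout sends SIGTERM by default):
sudo timeout 60 iftop -nNPt -L 100 -i wlp0s20f3 > iftop.out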

I guess you can try putting everything after the & on the next line. For this, you'll have to put the whole thing in a function in your .bashrc or in a script:
sudo iftop -nNPt -L 100 -i wlp0s20f3 > iftop.out &
pid=$!; sleep 20; sudo kill $pid
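A minimal sketch of such a function (the name run_iftop is just illustrative; note that $! captures the PID of sudo rather than of iftop itself, and sudo normally relays the TERM signal to its child):
run_iftop() {
    sudo iftop -nNPt -L 100 -i wlp0s20f3 > iftop.out &
    pid=$!             # PID of the backgrounded sudo process
    sleep 20
    sudo kill "$pid"   # stop it after 20 seconds
}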

Related

ROS/Linux: What exactly does '&' do in the Linux terminal?

I am working with ROS. To start ROS packages, you need to have the ROS master running in the background. Now, when I want to start the ROS package rviz, instead of opening two terminals:
Terminal1:
$ roscore
Terminal2:
$ rviz
I can do the following in one terminal:
$ roscore& rviz
But what exactly is happening here? When I end that terminal with Ctrl+C, it only closes rviz, but roscore keeps running in the background? Why, and how can I close it?
With a single &, the left side will run in the background, while the right side will run normally in the terminal.
Now, to close the first process you need to find its PID (process ID) and issue a termination command. To find the PID you can use pgrep (in your case PROCESS_NAME would be roscore):
pgrep -f PROCESS_NAME
Now to kill the process you can easily do:
kill -9 PID_HERE
Or you can do it with a single command:
pgrep -f PROCESS_NAME | xargs kill -9
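For example, with roscore (the PID shown is illustrative):
$ pgrep -f roscore
4242
$ kill -9 4242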

Run multiple commands simultaneously in bash in one line

I am looking for an alternative to something like ssh user@node1 uptime && ssh user@node2 uptime, where both of the SSH commands are run simultaneously. As they both block until the command returns, && and ; between them don't work.
My goal is to run infinite while loops on both nodes via SSH. So the first one would never return, and the second one would never be run. I would then like to save the output after terminating the loops with Ctrl+C to a log file and read it via Python.
Is there an easy solution to this?
Thanks in advance!
Capturing SSH output
On the one hand, you need to capture the ssh output/error and store it in a file so that you can process it afterwards with Python. For this purpose you can:
1- Store output and error directly into a file (the redirection to the file must come before 2>&1, otherwise stderr still goes to the terminal):
ssh user@node cmd > session.log 2>&1
2- Show output/error in the console while storing it into a file (I would recommend this one):
ssh user@node cmd 2>&1 | tee session.log
Check this for further information about the tee command.
Running commands in parallel
On the other hand, you want to run both commands in parallel and block the current bash process. You can achieve this by:
1- Blocking the current bash process until its children are done:
cmd1 & cmd2 & wait
Check this for further information about the wait command.
2- Spawning the child processes and freeing the current bash process. Note that the processes will be kept alive even if the main process ends:
nohup cmd1 & nohup cmd2 &
The whole thing
I would recommend combining both approaches using tee (so you can still see the ssh outputs on your terminal) and blocking the current process until everything is done (so that when you kill the main process all the processes are killed too).
ssh user@node1 uptime 2>&1 | tee session1.log & ssh user@node2 uptime 2>&1 | tee session2.log & wait
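With the infinite loops from the question instead of uptime, the same pattern might look like this (some_command is a placeholder):
ssh user@node1 'while true; do some_command; sleep 1; done' 2>&1 | tee session1.log &
ssh user@node2 'while true; do some_command; sleep 1; done' 2>&1 | tee session2.log &
wait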

shell script running infinitely

I have a shell script, run.sh:
cd elasticsearch-1.1.0/
./bin/elasticsearch
cd
cd RBlogs/DataFetcher/
mvn clean install assembly:single;
cd target/
java -jar DataFetcher-0.0.1-SNAPSHOT-jar-with-dependencies.jar
Here, if the second line (./bin/elasticsearch) executes, it runs forever, so the next lines will not execute. What I need is to run the next lines after 10 seconds. But
cd elasticsearch-1.1.0/
./bin/elasticsearch
sleep 10
cd
cd RBlogs/DataFetcher/
mvn clean install assembly:single;
cd target/
java -jar DataFetcher-0.0.1-SNAPSHOT-jar-with-dependencies.jar
This also will not execute the next lines, because ./bin/elasticsearch will not complete its execution within 10 seconds. So how can I solve this problem? Please help.
Adding & at the end of ./bin/elasticsearch will cause the process to run in a subshell, freeing the current shell up for the next commands.
./bin/elasticsearch &
Change this in your second version of the script and things should run like you want them to.
More information can be found in man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell.
The shell does not wait for the command to finish, and the return status is 0.
You may try to put it in the background; & can do this for you:
./bin/elasticsearch &
If you simply want elasticsearch to run in the background while the rest of the script progresses, just use &:
cd elasticsearch-1.1.0/
./bin/elasticsearch &
sleep 10
cd
cd RBlogs/DataFetcher/
However, if you want to run elasticsearch for at most 10 seconds, killing it if necessary, then proceeding with the rest of the script, you need something a little more complicated:
cd elasticsearch-1.1.0/
./bin/elasticsearch &
pid=$!
sleep 10
kill -0 $pid 2>/dev/null && kill $pid   # if it is still running, kill it
cd
cd RBlogs/DataFetcher/
As other answers mentioned, you can use ./bin/elasticsearch & to run the command in the background, record the process ID of the backgrounded command with child_pid=$!, and then stop the process with kill $child_pid after some time to implement a timeout mechanism.
Meanwhile, you can also synchronize another operation with the command run in the background by using the wait command. An example below:
./bin/elasticsearch &
# do something asynchronously here
wait # wait for ./bin/elasticsearch to finish
# do something synchronously here

Running processes simultaneously, Bash

I would like to run n processes (in my case simulations) simultaneously, using bash.
Right now this is what I'm running:
for file in $ini/SAN*.ini; do
    echo "Running $file..."
    temp=$(basename $file .ini)
    mosrun -G opp_run -r 0 -u Cmdenv -n ..:../../src -l ../../src/inet SAN.ini > $outputs/$temp.out
done
Problem is, the loop only progresses to the next iteration after the simulation is done. Any suggestions? Thanks!
You should be able to run your command in the background by adding a & after it.
That should make them run in parallel, although in the background.
(Small side note: the processes will continue to run even if you abort the script, so you might want to add a trap to kill the processes if you hit e.g. Ctrl+C while the script is running. Look at the bash manual.)
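Putting that together with the loop from the question, a minimal sketch might look like this (the trap and the final wait are the additions):
trap 'kill $(jobs -p) 2>/dev/null' EXIT   # kill any still-running simulations if the script is aborted
for file in $ini/SAN*.ini; do
    echo "Running $file..."
    temp=$(basename $file .ini)
    mosrun -G opp_run -r 0 -u Cmdenv -n ..:../../src -l ../../src/inet SAN.ini > $outputs/$temp.out &
done
wait   # block until all backgrounded simulations have finished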

Kill process in bash that runs more than specified time?

I have a shutdown script for Oracle in the /etc/init.d dir.
On the "stop" command it does:
su oracle -c "lsnrctl stop >/dev/null"
su oracle -c "sqlplus sys/passwd as sysdba #/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
..
The problem is that when lsnrctl or sqlplus is unresponsive, this "stop" script never ends and the server can't shut down. The only way out is to "kill -9" it.
I'd like to rewrite the script so that if a command has not finished after 5 minutes (for example), it is terminated.
How can I achieve this? Could you give me an example?
I'm under Linux RHEL 5.1 + bash.
If able to use third-party tools, I'd leverage one of the pre-written helpers you can call from your script (doalarm and timeout are both mentioned by the BashFAQ entry on the subject).
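For instance, with a timeout helper on the PATH (the exact flags vary between implementations, but the common form is a duration followed by the command):
timeout 300 su oracle -c "lsnrctl stop >/dev/null"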
If writing such a thing myself without using such tools, I'd probably do something like the following:
function try_proper_shutdown() {
    su oracle -c "lsnrctl stop >/dev/null"
    su oracle -c "sqlplus sys/passwd as sysdba @/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
}
function resort_to_harsh_shutdown() {
    for progname in ora_this ora_that ; do
        killall -9 $progname
    done
    # also need to do a bunch of cleanup with ipcs/ipcrm here
}
# here's where we start the proper shutdown approach in the background
try_proper_shutdown &
child_pid=$!
# rather than keeping a counter, we check against the actual clock each cycle
# this prevents the script from running too long if it gets delayed somewhere
# other than sleep (or if the sleep commands don't actually sleep only the
# requested time -- they don't guarantee that they will).
end_time=$(( $(date '+%s') + (60 * 5) ))
while (( $(date '+%s') < end_time )); do
    if ! kill -0 $child_pid 2>/dev/null; then
        exit 0   # the background shutdown finished on its own
    fi
    sleep 1
done
# okay, we timed out; stop the background process that's trying to shut down nicely
# (note that alone, this won't necessarily kill its children, just the subshell we
# forked off) and then make things happen.
kill $child_pid
resort_to_harsh_shutdown
Wow, that's a complex solution. Here's something easier: you can track the PID and kill it later.
my_command &   # my_command is the command you want to run; the & backgrounds it
PID=$!         # $! is the PID of the last backgrounded command
sleep 120 && doProperShutdown || kill $PID   # sleep for 120 seconds and shut down properly; if that fails, kill it manually. This can be backgrounded too.
