Start background process with ssh, run experiments script, then stop it - bash

I am running client-server performance experiments on several remote machines. I am trying to write a script to automate the experiments. Here is what it looks like (in simplified form) at the moment:
for t in 0 1 2 3 4 5 6 7 8 9; do
    cmd1="ssh user@${client1} runclient --threads=${t}"
    cmd2="ssh user@${client2} runclient --threads=${t}"
    $cmd1 &
    $cmd2 &
    wait
done
runclient connects to a server that I have started manually. It works fine, but I would like to automate starting and stopping the server as well. That means:
Start the server in the background at the beginning of experiments
Run all the experiments
Stop the server at the end of experiments
I have found several suggestions, but I am not sure which one is right for my case. Some recommend nohup, but I am not sure how to use it, and I don't understand why I should redirect stdin, stdout, and stderr. There is also the -f option to ssh to start a background process. In that case, how can I stop it later?
Edit: in response to the comments, the server is part of the performance experiments. I start it in a similar way to the client.
ssh user@${server} runserver
The only difference is that I want to start the server once, run several experiments on the clients with different parameters, and then stop the server. I could try something like this:
ssh user@${server} runserver &
for t in 0 1 2 3 4 5 6 7 8 9; do
    cmd1="ssh user@${client1} runclient --threads=${t}"
    cmd2="ssh user@${client2} runclient --threads=${t}"
    $cmd1 &
    $cmd2 &
    wait
done
But as the server does not stop, the script would never get past the first wait.

Track your PIDs and wait for them individually.
This also lets you track failures, as shown below:
ssh "user#${server}" runserver & main_pid=$!
for t in 0 1 2 3 4 5 6 7 8 9; do
ssh "user#${client1}" "runclient --threads=${t}" & client1_pid=$!
ssh "user#${client2}" "runclient --threads=${t}" & client2_pid=$!
wait "$client1_pid" || echo "ERROR: $client1 exit status $? when run with $t threads"
wait "$client2_pid" || echo "ERROR: $client2 exit status $? when run with $t threads"
done
kill "$main_pid"

Related

Bash - kill a command after a certain time [duplicate]

In my bash script I run a command that invokes another script. I repeat this command many times in a for loop, and so I want to wait until the script is finished before running it again. My bash script is as follows:
for k in $(seq 1 5)
do
    sed_param='s/mu = .*/mu = '${mu}';/'
    sed -i "$sed_param" brusselator.c
    make brusselator.tst &
done
As far as I know the & at the end makes the script wait until the command is finished, but this isn't working. Is there some other way?
Furthermore, sometimes the command can take very, very long; in that case I would want to wait at most 5 seconds. But if the command is done earlier I wouldn't want to wait the full 5 seconds. Is there some way to achieve this?
There is the timeout command. You would use it like:
timeout 5 make brusselator.tst
Maybe you would also like to see whether it exited successfully, failed, or was killed because it timed out:
timeout 5 make brusselator.tst && echo OK || echo "Failed, status $?"
If the command times out, and --preserve-status is not set, then the command exits with status 124. A different status means that make failed for a different reason before timing out.
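Putting this together with the loop from the question, a minimal sketch (assuming mu is meant to take the loop value, which the original snippet leaves ambiguous):
for k in $(seq 1 5)
do
    sed -i "s/mu = .*/mu = ${k};/" brusselator.c
    # no trailing &: the loop blocks here, but for at most 5 seconds per run
    timeout 5 make brusselator.tst || echo "run $k failed or timed out, status $?"
done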

How to run 2 commands in bash concurrently

I want to test my server program (let's call it A) that I just made. When A is executed by this command
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks to listen for a connection. After that, I want to connect to the server in A using
nc 127.0.0.1 1234 < ./test/server_file.txt
so A can be unblocked and continue. The problem is I have to manually type these commands in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.
You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
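If you also want the script to collect the server's exit status after the client finishes, a small variation (a sketch, not part of the original answer) is to record the background PID and wait on it:
$VALGRIND ./test/server_tests 2 >>./test/test.log &
server_pid=$!
nc 127.0.0.1 1234 < ./test/server_file.txt
wait "$server_pid"    # blocks until the server exits; $? is then its exit status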
From the question, it looks as if the goal is to build a test script for the server that will also capture the memory check.
For the specific case of building a test script, it makes sense to extend the referenced question in the comment and add some commands to make it unlikely for the test script to hang. The script caps the time for executing the server and the client, and if the tests complete ahead of time, it attempts to shut the server down.
# Put the server in the background
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# ... additional tests here
# Terminate the server, if still running. May use other commands/signals, based on the server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check log file for errors
# ...
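To flesh out that last step, one possible check (an assumption on my part, keyed on the "ERROR SUMMARY" line that valgrind writes to its log) would be:
# Fail the test run if valgrind reported a nonzero error count
if grep -q "ERROR SUMMARY: [1-9]" ./test/test.log; then
    echo "valgrind reported errors" >&2
    exit 1
fi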

How to prevent a background command from being suspended immediately [duplicate]

Imagine this script (do not consider auth and other stuff, all SSH commands run just fine without &)
(ssh foo.com "/bin/sleep 5 && echo 1") &
(ssh bar.com "/bin/sleep 5 && echo 1") &
wait
echo "My commands finished"
Now, I would expect all my SSH commands to run immediately as background jobs and then, when finished, I would get the final "My commands finished" message.
But that's not what happens...
What actually happens is this
[1] 16155
[1] + 16155 suspended (tty input) ssh foo.com "/bin/sleep 5 && echo 1"
[2] 16156
[2] + 16156 suspended (tty input) ssh bar.com "/bin/sleep 5 && echo 1"
My commands finished
So all the background commands go immediately into the suspended state, where they stay forever. Sure, I can bring them back with fg or kill -CONT PID, but that's all sequential. I need to run all my commands in parallel and just wait for all of them to finish.
Do you know why is that and how to avoid the suspended state?
Thanks to n.m. The solution is to redirect stdin from /dev/null. That way the subprocess never reads from the terminal, so it is not stopped for tty input as in the previous case, which led to the suspended state.
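Applied to the script from the question, the fix looks like this (ssh's -n option, which also redirects stdin from /dev/null, should work equally well):
(ssh foo.com "/bin/sleep 5 && echo 1" < /dev/null) &
(ssh bar.com "/bin/sleep 5 && echo 1" < /dev/null) &
wait
echo "My commands finished"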

Shell script: How to loop run two programs?

I'm running an Ubuntu server to mine crypto. It's not a very stable coin yet and their main node gets disconnected sometimes. When this happens, it crashes the program with a fatal error.
At first I wrote a loop script so it would keep running after a crash and just try again after 15 seconds:
while true; do
    ./miner <somecodetoconfiguretheminer> && break
    sleep 15
done
This works, but is inefficient. Sometimes the loop will keep running for 30 minutes until the main node is back up, which leaves 30 minutes of hashing power unused. So I want it to run a second miner for 15 minutes to mine another coin, then check whether the first miner is working again.
So basically: Start -> Mine coin 1 -> if crash -> Mine coin 2 for 15 minutes -> go to Start
I tried the script below but the server just becomes unresponsive once the first miner disconnects:
while true;
do ./miner1 <somecodetoconfiguretheminer> &&break;
timeout 900 ./miner2
sleep 15
done;
I've read through several topics/questions on how && break works, how timeout works, and how while true works, but I can't figure out what I'm missing here.
Thanks in advance for the help!
A much simpler solution would be to run both of the programs all the time, and lower the priority of the less-preferred one. On Linux and similar systems, that is:
nice -10 ./miner2loop.sh &
./miner1loop.sh
Then the scripts can be similar to your first one.
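For completeness, a sketch of what miner1loop.sh might contain, reusing the loop from the question (miner2loop.sh would be the same with ./miner2):
#!/bin/bash
# keep restarting the miner until it exits cleanly
while true; do
    ./miner1 <somecodetoconfiguretheminer> && break
    sleep 15
done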
Okay, so after trial and error, and some help, I found out that there is nothing wrong with my initial code. timeout appears to behave differently on my Linux instance when used in a terminal than in a bash script. Used in a terminal it behaves as it should: it counts down and then kills the process it started. Used in a bash script, however, it acts as if I had typed 'sleep': it counts down and then simply stops.
Apparently this has to do with my Ubuntu instance (running on a VPS), even though I have the latest version of coreutils and everything else installed through apt-get update etc. This is the case for me on DigitalOcean as well as Google Compute.
The solution is to use the timeout code as a function within the bash script, as found in another thread on Stack Overflow. I named the function timeout2 so as not to shadow the misbehaving timeout command:
#!/bin/bash
# Executes command with a timeout
# Params:
#   $1 timeout in seconds
#   $2 command
# Returns 1 if timed out, 0 otherwise
timeout2() {
    time=$1
    # start the command in a subshell to avoid problems with pipes
    # (spawn accepts one command)
    command="/bin/sh -c \"$2\""
    expect -c "set echo \"-noecho\"; set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    if [ $? = 1 ] ; then
        echo "Timeout after ${time} seconds"
    fi
}
while true; do
    ./miner1 <parameters for miner> && break
    sleep 5
    timeout2 300 "./miner2 <parameters for miner>"
done
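Note that timeout2 receives the whole command as its second parameter, which is why it is quoted above. On a system where the coreutils timeout works as documented, the equivalent call would simply be timeout 300 ./miner2 <parameters for miner>, with the duration given before the command.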

Run Multiple Shell Scripts From One Shell Script

Here's what I'm trying to do. I have 4 shell scripts. Script 1 needs to be run first, then 2, then 3, then 4, and they must be run in that order. Script 1 needs to be running (and waiting in the background) for 2 to function properly; however, 1 takes about 3 seconds to get ready for use. I tried doing ./1.sh & ./2.sh & ./3.sh & ./4.sh, but this results in a total mess, since 2 starts requesting things from 1 when 1 is not ready yet. So, my question is: from one shell script, how do I get it to start script 1, wait about 5 seconds, start script 2, wait about 5 seconds, etc., without stopping any previous scripts from running (i.e. they all have to be running in the background for any higher-numbered script to work)? Any suggestions would be much appreciated!
May I introduce you to the sleep command?
./1.sh & sleep 5
./2.sh & sleep 5
./3.sh & sleep 5
./4.sh
#!/bin/sh
./1.sh & sleep 5; ./2.sh & sleep 5; ./3.sh & sleep 5; ./4.sh
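Both versions let the wrapper script exit while 1.sh, 2.sh and 3.sh may still be running in the background. If the wrapper should stay alive until everything is done, a sketch of a variation:
#!/bin/sh
./1.sh & sleep 5
./2.sh & sleep 5
./3.sh & sleep 5
./4.sh
wait    # block until all backgrounded scripts have exited as well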
