Launch the same program several times with a delay using bash

I am writing a bash script to test my multi-connection TCP server. The script is supposed to launch the client several times. Here is what I have done so far:
#!/bin/bash
toport=8601
for ((port = 8600; port < 8610; port++));
do
client 10.xml &
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
done
As it is going too fast, most of the clients don't have enough time to connect to the server. So I added sleep 1 in the loop, as follows:
#!/bin/bash
toport=8601
for ((port = 8600; port < 8610; port++));
do
client 10.xml &
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
sleep 1
done
But for some reason it gets even worse, since no clients are able to connect to the server anymore. Do you have any idea why?

In your script you are running the client in the background and putting the sleep statement at the end of the loop. Modify it as shown below, or run your client in the foreground instead of the background:
client 10.xml &
sleep 3
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
#sleep 1
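For reference, here is a minimal sketch of the complete loop with that change applied (this assumes client and replace behave as described in the question):
#!/bin/bash
toport=8601
for ((port = 8600; port < 8610; port++));
do
client 10.xml &                      # launch the client in the background
sleep 3                              # give it time to read 10.xml and connect
replace $port $toport -- "10.xml"    # only then bump the port in the xml file
((toport=toport+1))
done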

Related

How to run 2 commands on bash concurrently

I want to test a server program (let's call it A) that I just made. When A is executed with this command
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks to listen for a connection. After that, I want to connect to the server in A using
nc 127.0.0.1 1234 < ./test/server_file.txt
so A can be unblocked and continue. The problem is that I have to manually type these commands in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.
You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
From the question, it looks as if the goal is to build a test script for the server that will also capture the memory check.
For that specific case, it makes sense to extend the referenced question in the comment and add some commands that make it unlikely for the test script to hang. The script caps the time for executing the client and the server, and if the tests complete ahead of time, it attempts to shut down the server.
# Put the server in the background
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# ... additional tests here
# Terminate the server, if still running. May use other commands/signals, based on the server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check log file for errors
...
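For that last step, one hedged possibility is to grep valgrind's summary line, assuming stderr (where valgrind writes by default) is what gets appended to the log:
grep -q "ERROR SUMMARY: 0 errors" ./test/test.log || echo "valgrind reported errors"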

Keep ssh tunnel open after running script

I have a device with intermittent connectivity that "calls home" to open a reverse tunnel to allow me to SSH into it. This works very reliably when started by systemd and automatically restarted on any exit:
ssh -R 1234:localhost:22 -N tunnel-user@reliable.host
Now, however, I want to run a script on the reliable-host on connect. This is easy enough with a simple change to the ssh command line: swap -N for a script name on the remote reliable-host:
ssh -R 1234:localhost:22 tunnel-user@reliable.host ./on-connect.sh
The problem is that once the script exits, it closes the tunnels if they're not in use.
One workaround I've found is to put a long sleep at the end of my script. This however leaves sleep processes around after the connection drops since sleep doesn't respond to SIGHUP. I could put a shorter sleep in an infinite loop (I think) but that feels hacky.
~/on-connect.sh
#!/bin/bash
# Do stuff...
sleep infinity
How can I get ssh to behave like -N has been used so that it stays connected with no activity but also runs a script on initial connection? Ideally without needing to have a special sleep (or equivalent) in the remote script but, if not possible, proper cleanup on the reliable-host when the connection drops.
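One hedged sketch of the loop-style cleanup the question hints at: run the sleep in the background and wait on it, so that bash can handle SIGHUP when the connection drops and kill the sleep. This assumes the remote shell is bash and that sshd delivers SIGHUP to the script on disconnect:
#!/bin/bash
# Do stuff...

# Keep the session alive, but clean up when the connection drops.
# A foreground sleep would block signal handling until it exits,
# so background it and wait on it instead.
trap 'kill $sleep_pid 2>/dev/null; exit' HUP TERM INT
sleep infinity &
sleep_pid=$!
wait $sleep_pid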

Make bash wait until remote server kickstart is done (it will create a file when it's done)

I am creating a script to kickstart several servers. I am nearly finished, however I want the bash script to wait until the server kickstart is done.
When the kickstart is done and the server has rebooted, a file will be created on the remote kickstarted server; it is located under "/root/" and is called "kickstart-DONE".
Is it possible to make the bash script wait until it sees this file and then post something like "Done!"...?
I tried searching the forums and the internet, but probably I am searching incorrectly, as I was unable to find anything relevant to this issue. Heck, I don't even know if this is possible at all.
So in short; I run my script which kickstarts a server. After the kickstart is done it will create a file on the remote (- kickstarted) server called: kickstart-DONE. This would be an indication for the script that the kickstart is fully done and the server can be used. How do I make the script aware of this?
I hope someone understands what I mean and what I am trying to achieve....
Thanks in advance.
//EDIT
SOLVED! Thanks to Cole Tierney!
Cole Tierney gave some good answers; however, though it works, it does not wait until the server is kickstarted. I ran the script to kickstart a server, and at the end it ran the provided command:
ssh root@$HWNODEIP "while ! test -e /root/kickstart-DONE; do sleep 3; done; echo KICKSTART IS DONE...\!"
However, the kickstart can take some time (depending on server speed and such, ranging from 15 minutes to 1 hour), and the command timed out:
ssh: connect to host 100.125.150.175 port 22: Connection timed out
Is there a way to keep the script from timing out at all, so it stays alive until the server comes back or until more than an hour or so has passed?
Maybe there is also a way to show that the script is still active? Like "Waiting... 5 minutes passed." "Waiting... 10 minutes passed." etc.
So it gives the current user some indication that it has not died?
You could call sleep until the file exists:
while ! test -e /root/kickstart-DONE; do sleep 3; done; echo kickstart done
Or sleep until the server is accepting ssh connections. Run the following netcat command locally to check when port 22 is open on the server (remove echo closed; if you don't want the extra feedback):
while ! nc -zw2 $HWNODEIP 22; do echo closed; sleep 3; done
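Combining the two, here is a rough sketch that reports progress while waiting for sshd to come back and then checks for the marker file (assuming the same $HWNODEIP variable; adjust the one-hour cap to taste):
elapsed=0
while ! nc -zw2 $HWNODEIP 22; do
sleep 60
((elapsed++))
echo "Waiting... $elapsed minute(s) passed."
((elapsed >= 60)) && { echo "Giving up after 1 hour."; exit 1; }
done
ssh root@$HWNODEIP 'while ! test -e /root/kickstart-DONE; do sleep 3; done'
echo "Done!"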
On a side note, it's useful to set up a host entry in ~/.ssh/config. You can add all sorts of ssh options here without making your ssh command unwieldy. Options that are common to all host entries can be added outside of the host entries. See man ssh_config for other options. Here's an example (server1 can be anything; replace <server ip> with the server's IP address):
Host server1
Hostname <server ip>
User root
Then to use it:
ssh server1 'some command'
Note that many systems will not allow ssh connections from root for security reasons. You may want to consider adding another user for kickstart stuff. Add this user to sudoers if root access is needed.

How to run bash code inside netcat connection

So I have a problem: I want to run bash commands when I connect to a server. I have to capture the flag when I connect to a server X via port Y; the port stays open only 5-10 seconds, then closes, and you need to guess a random number from 1 to 1000. The for loop is not the problem (for i in {1..1000}; do echo $i; done), the problem is how to make netcat wait until the connection is established and then run a command. I searched on Stack Overflow and a lot of other websites, and it is not what I need.
while ! nc server port; do
sleep 0.1
done
and somewhere in here run the command; but if you put it after the while loop, it will run once the netcat connection is already closed
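A hedged sketch of one way around this, assuming the goal is to pipe the guesses into a single connection as soon as the port opens (server and port stand in for the real host and port; nc -z probes the port without consuming the connection):
until nc -zw1 server port; do sleep 0.1; done             # wait for the port to open
for i in {1..1000}; do echo $i; done | nc server port     # then feed the guesses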

Running two C++ executables in a bash script

I wrote a bash script to run both a client and a server.
The code is written in C++; client and server are executables.
port=8008
pack_rate=16
echo "Starting server"
./server -p $port -n 512 -e 0.001
echo "Starting client"
./client -p $port -n 512 -l 16 -s localhost -r $pack_rate -d
echo "end"
In the above case, the client will send data packets to the server and the server will process them.
So, both the client and server should run at the same time.
I tried to run the script file, but as expected only
"Starting server"
is getting printed. So the server is running, and it will not terminate until it receives 512 packets from the client. But the client process cannot start until the server ends in the bash script.
So, is there any way I can run both processes simultaneously using a single bash script?
Add & at the end of the ./server line; it will run the process in the background and keep executing the rest of the script.
You need to add an &:
./server -p $port -n 512 -e 0.001 &
Thus, the script will not wait for the server program to end before continuing.
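Put together, a minimal sketch of the corrected script, with a wait added so "end" is only printed once the server has finished (this assumes the server exits after receiving its 512 packets):
#!/bin/bash
port=8008
pack_rate=16
echo "Starting server"
./server -p $port -n 512 -e 0.001 &    # background, so the script continues
server_pid=$!
echo "Starting client"
./client -p $port -n 512 -l 16 -s localhost -r $pack_rate -d
wait $server_pid                       # block until the server finishes
echo "end"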
