How to run two commands concurrently in bash

I want to test a server program I just made (let's call it A). When A is started with this command:
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks while listening for a connection. After that, I want to connect to the server in A using:
nc 127.0.0.1 1234 < ./test/server_file.txt
so A can be unblocked and continue. The problem is that I have to type these commands manually in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.

You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
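Putting the two together in one script (a sketch using the exact commands from the question; the short sleep before nc is my own assumption, added to give the server time to start listening on port 1234):
#!/bin/bash
# Sketch: run the server in the background, run the client, then wait.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
server_pid=$!
# Give the server a moment to bind the port before connecting;
# a real port check (e.g. nc -z) would be more robust than a fixed sleep.
sleep 1
nc 127.0.0.1 1234 < ./test/server_file.txt
wait "$server_pid"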

From the question, it looks as if the goal is to build a test script for the server that will also capture the memory check.
For that specific case, it makes sense to extend the question referenced in the comment and add some commands to make it unlikely for the test script to hang. The script caps the time for running the client and the server, and if the tests complete ahead of time, it attempts to shut the server down.
# Put the server in the background, with a time cap
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client, also with a time cap
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# .. Additional tests here
# Terminate the server, if still running. May use other commands/signals, based on the server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check log file for errors
# ...
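One way to flesh out that last step (a sketch only; it assumes valgrind's usual "ERROR SUMMARY: N errors" lines end up in ./test/test.log):
# Fail the run if valgrind reported a non-zero error count.
if grep -q "ERROR SUMMARY: [1-9]" ./test/test.log; then
    echo "valgrind reported errors; see ./test/test.log" >&2
    exit 1
fi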

Related

Keep ssh tunnel open after running script

I have a device with intermittent connectivity that "calls home" to open a reverse tunnel to allow me to SSH into it. This works very reliably started by systemd and automatically restarted on any exit:
ssh -R 1234:localhost:22 -N tunnel-user@reliable.host
Now, however, I want to run a script on the reliable host on connect. This is easy enough with a simple change to the ssh ... CLI: swap -N for a script name on the remote reliable host:
ssh -R 1234:localhost:22 tunnel-user@reliable.host ./on-connect.sh
The problem is that once the script exits, it closes the tunnels if they're not in use.
One workaround I've found is to put a long sleep at the end of my script. This however leaves sleep processes around after the connection drops since sleep doesn't respond to SIGHUP. I could put a shorter sleep in an infinite loop (I think) but that feels hacky.
~/on-connect.sh
#!/bin/bash
# Do stuff...
sleep infinity
How can I get ssh to behave as if -N had been used, so that it stays connected with no activity but also runs a script on the initial connection? Ideally without needing a special sleep (or equivalent) in the remote script; if that's not possible, then at least with proper cleanup on the reliable host when the connection drops.
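One sketch of the fallback (cleanup on the reliable host when the connection drops); it assumes the script's parent is the sshd session process for this connection, which goes away when the tunnel dies:
#!/bin/bash
# ~/on-connect.sh -- sketch only; relies on $PPID being the sshd session
# process that started this script.
# Do stuff...
# Idle while that sshd process is still alive; once the connection drops
# it disappears, the loop ends, and no stray sleep processes are left.
while kill -0 "$PPID" 2>/dev/null; do
    sleep 30
done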

Redirect ssh output to file while performing other commands

I'm looking for some help with a script of mine. I'm new to bash scripting, and I'm trying to start a service on a remote host with ssh and then capture all the output of that service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request it receives, while the second command sends a request to that server.
Normally, if I were to execute the two commands in two separate shells, I would get the first response on the terminal, but now I need it in the file.
I would like the server to write all the requests to the log file, keeping a sort of open ssh connection to receive any new output of the server process.
I hope I made myself clear.
Thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests).
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command on a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, sending the output of the first terminal (which is empty until the second command is run) to a file, so that ANY output triggered by later requests is sent to that file.
EDIT2: As of now, with the commands above, the server answers the requests, but the output is not recorded in the log file.
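A sketch of how the two commands could sit in one script (hypothetical; it only rearranges the commands from the question, also captures stderr in case the server logs there, and waits on the background ssh so the redirection to logFile stays open for later requests):
#!/bin/bash
# Sketch only: server via ssh in the background, output to logFile,
# then the request, then wait so the log keeps receiving output.
ssh "$remotehost" "./server $port" > logFile 2>&1 &
server_ssh_pid=$!
ssh "$remotehost" "nc -q 2 localhost $port < $payload"
# ... further requests or other commands here ...
wait "$server_ssh_pid"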

Make bash wait until remote server kickstart is done (it will create a file when it's done)

I am creating a script to kickstart several servers. I am nearly finished; however, I want the bash script to wait until the server kickstart is done.
When the kickstart is done and the server is rebooted, a file called kickstart-DONE will be created on the remote kickstarted server under /root/.
Is it possible to make the bash script wait until it sees this file and then print something like "Done!"?
I tried searching the forums and the internet, but I was probably searching incorrectly, as I was unable to find anything relevant to this issue. Heck, I don't even know if this is possible at all.
So, in short: I run my script, which kickstarts a server. After the kickstart is done, it will create a file on the remote (kickstarted) server called kickstart-DONE. This would be an indication to the script that the kickstart is fully done and the server can be used. How do I make the script aware of this?
I hope someone understands what I mean and what I am trying to achieve.
Thanks in advance.
//EDIT
SOLVED! Thanks to Cole Tierney!
Cole Tierney gave some good answers; however, although it works, it does not wait until the server is kickstarted. I ran the script to kickstart a server, and at the end it ran the provided command:
ssh root@$HWNODEIP "while ! test -e /root/kickstart-DONE; do sleep 3; done; echo KICKSTART IS DONE...\!"
However, since the kickstart can take some time (depending on server speed and such, ranging from 15 minutes to 1 hour), the command timed out:
ssh: connect to host 100.125.150.175 port 22: Connection timed out
Is there a way to keep the script from timing out, so it keeps trying until the server comes back or until more than an hour or so has passed?
Maybe there is also a way to show that the script is still active, like "Waiting... 5 minutes passed." "Waiting... 10 minutes passed." etc.,
so it gives the current user some indication that it hasn't died?
You could call sleep until the file exists:
while ! test -e /root/kickstart-DONE; do sleep 3; done; echo kickstart done
Or sleep until the server is accepting ssh connections. Run the following netcat command locally to check when port 22 is open on the server (remove echo closed; if you don't want the extra feedback):
while ! nc -zw2 $HWNODEIP 22; do echo closed; sleep 3; done
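A sketch combining the two loops with the progress feedback asked for above (assumptions on my part: the same $HWNODEIP variable, and that the file check is run over ssh from the local machine once port 22 is reachable again):
#!/bin/bash
# Sketch only: wait for sshd to come back, then for the kickstart marker
# file, printing elapsed minutes so the user can see the script is alive.
minutes=0
until nc -zw2 "$HWNODEIP" 22; do
    sleep 60
    minutes=$((minutes + 1))
    echo "Waiting for ssh... ${minutes} minute(s) passed."
done
until ssh "root@$HWNODEIP" 'test -e /root/kickstart-DONE'; do
    sleep 60
    minutes=$((minutes + 1))
    echo "Waiting for kickstart-DONE... ${minutes} minute(s) passed."
done
echo "Done!"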
On a side note, it's useful to set up a host entry in ~/.ssh/config. You can add all sorts of ssh options there without making your ssh command unwieldy. Options that are common to all host entries can be added outside the host entries. See man ssh_config for other options. Here's an example (server1 can be anything; replace <server ip> with the server's IP address):
Host server1
    Hostname <server ip>
    User root
Then to use it:
ssh server1 'some command'
Note that many systems will not allow ssh connections from root for security reasons. You may want to consider adding another user for kickstart stuff. Add this user to sudoers if root access is needed.

SSH command within a script terminates prematurely

From myhost.mydomain.com, I start an nc listener. Then I log in to another host to start a netcat push to my host:
nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
These two commands are part of a script. Only 32K bytes are sent to the host before the ssh command terminates; the nc listener then gets an EOF and terminates as well.
When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com, the complete file is downloaded. What's going on?
I think there is something else happening in your script that causes this effect. For example, if you run the second command in the background as well and the script then terminates, your OS might kill the background commands during script cleanup.
Also check whether set -o pipefail is in effect; with it, a pipeline's exit status reflects any failing command in the pipeline, not just the last one.
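If the listener really is backgrounded like that, a sketch of the fix (hypothetical, since the full script isn't shown) is to wait for it before the script ends:
#!/bin/bash
# Sketch: the two commands from the question, plus an explicit wait so the
# script doesn't exit (and take the background listener with it) early.
nc -l 9999 > data.gz &
listener_pid=$!
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
wait "$listener_pid"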
On a second note, the approach looks overly complex to me. Try to reduce it to
ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz
(ssh connects the stdout of the remote command with the local one). It's clearer when you write it like this:
ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"
That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.

For loop in a shell script

My script is:
for i in $(seq $nb_lignes)   # a list of machines
do
ssh root@$machine -x "java ....."
sleep 10
done
--> I execute this script from machine C.
I have two machines, A and B ($nb_lignes=2):
ssh root@$machineA -x "java ....." : creates a node with a Pastry overlay
wait 10 seconds
ssh root@$machineB -x "java ....." : creates another node that joins the first (that's why I used sleep 10 seconds)
I run the script from machine C:
I'd like it to display: node 1 is created, wait 10 seconds, then display node 2 is created.
My problem: it displays "node 1 is created" only.
When I type Ctrl+C, it displays "node 2 is created".
PS: the two java processes are still running on machines A and B.
Thank you
From the way I'm reading this, armani is correct: since your java program does not exit, the second iteration of the loop doesn't run until you "break" the first one. I would guess that the Java program is ignoring a break signal sent to it by ssh.
Rather than backgrounding each ssh with an &, you're probably better off using the tools provided by ssh itself. From the ssh man page:
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background. This
implies -n. The recommended way to start X11 programs at a
remote site is with something like ssh -f host xterm.
So ... your script would look something like this:
for host in machineA machineB; do
ssh -x -f root@${host} "java ....."
sleep 10
done
Try the "&" character after the "ssh" command. That spawns the process separately [background] and continues on with the script.
Otherwise, your script is stuck running ssh.
EDIT: For clarity, this would be your script:
for i in $(seq $nb_lignes)   # a list of machines
do
ssh root@$machine -x "java ....." &
sleep 10
done
