How to run bash code inside netcat connection - bash

So I have a problem: I want to run bash commands when I connect to a server. I have to capture the flag when I connect to a server X via port Y; the port stays open for only 5-10 seconds before closing, and I need to guess a random number from 1 to 1000. The loop is not the problem (for i in {1..1000}; do echo $i; done); the problem is how to make netcat wait until the connection is established and then run a command. I have searched Stack Overflow and many other sites, but nothing matches what I need.
while ! nc server port; do
sleep 0.1
done
and somewhere in here run the command; but if you put it after the while loop, it will only run once netcat has already closed.
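If the loop output is simply piped into nc, nc itself does the waiting: it only starts reading stdin once the TCP connection is established, so no polling loop is needed. A minimal sketch (X and Y stand in for the server and port from the question):

```shell
#!/bin/bash
# Sketch: nc reads stdin only after the TCP connection is up, so piping
# the loop's output into nc already "waits" for the connection.
# "X" and "Y" are placeholders from the question.
server=X
port=Y

for i in {1..1000}; do
    echo "$i"
done | nc "$server" "$port"
```

If the port is only open intermittently, this pipeline can replace the bare nc inside the while loop above, so the guesses are sent on the attempt that finally connects.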

Related

How to run 2 commands on bash concurrently

I want to test a server program I just wrote (let's call it A). When A is executed by this command
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks, listening for a connection. After that, I want to connect to the server in A using
nc 127.0.0.1 1234 < ./test/server_file.txt
so that A is unblocked and can continue. The problem is that I have to type these commands manually in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.
You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
From the question, it looks as if the goal is to build a test script for the server that will also capture memory checks.
For this specific case, it makes sense to extend the referenced question from the comment and add some commands to make it unlikely for the test script to hang. The script caps the time for executing the client and the server, and if the tests complete ahead of time, it attempts to shut down the server.
# Put the server in the background
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# ... additional tests here
# Terminate the server, if still running. May use other commands/signals, based on the server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check the log file for errors
...

Keep ssh tunnel open after running script

I have a device with intermittent connectivity that "calls home" to open a reverse tunnel to allow me to SSH into it. This works very reliably started by systemd and automatically restarted on any exit:
ssh -R 1234:localhost:22 -N tunnel-user@reliable.host
Now however I want to run a script on the reliable-host on connect. This is easy enough with a simple change to the ssh ... command line: swap -N for a script name on the remote reliable-host:
ssh -R 1234:localhost:22 tunnel-user@reliable.host ./on-connect.sh
The problem is that once the script exits, it closes the tunnels if they're not in use.
One workaround I've found is to put a long sleep at the end of my script. This however leaves sleep processes around after the connection drops since sleep doesn't respond to SIGHUP. I could put a shorter sleep in an infinite loop (I think) but that feels hacky.
~/on-connect.sh
#!/bin/bash
# Do stuff...
sleep infinity
How can I get ssh to behave like -N has been used so that it stays connected with no activity but also runs a script on initial connection? Ideally without needing to have a special sleep (or equivalent) in the remote script but, if not possible, proper cleanup on the reliable-host when the connection drops.
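One possible approach, sketched under the assumption that OpenSSH connection multiplexing (ControlMaster) is available on both ends: keep a pure -N master connection that carries the reverse tunnel, and run the script as a second ssh command over the same connection, so the script's exit does not tear the tunnel down. The socket path is illustrative:

```shell
# Sketch using OpenSSH connection multiplexing (socket path is an example).
# The master connection behaves exactly like the original -N invocation;
# on-connect.sh runs over the same TCP connection via the control socket.
sock=/tmp/tunnel-%r@%h:%p

# 1. Background master: -M master mode, -S control socket,
#    -f fork after auth, -N no remote command.
ssh -fN -M -S "$sock" -R 1234:localhost:22 tunnel-user@reliable.host

# 2. Run the on-connect script through the existing master connection.
ssh -S "$sock" tunnel-user@reliable.host ./on-connect.sh
```

The tunnel's lifetime is now tied to the master connection rather than to the script, so no sleep is needed in on-connect.sh.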

Launch the same program several times with delay using bash

I am writing a bash script to test my multi-connection TCP server. The script is supposed to launch the client several times. Here is what I have done so far:
#!/bin/bash
toport=8601
for ((port = 8600; port < 8610; port++));
do
client 10.xml &
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
done
As it is going too fast, most of the clients don't have enough time to connect to the server. So I added sleep 1 in the loop, as follows:
#!/bin/bash
toport=8601
for ((port = 8600; port < 8610; port++));
do
client 10.xml &
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
sleep 1
done
But for some reason it gets even worse: now no clients are able to connect to the server at all. Do you have any idea why?
In your script you are running the client in the background and putting the sleep statement at the end of the loop. Modify it as below, or run your client in the foreground instead of in the background:
client 10.xml &
sleep 3
replace $port $toport -- "10.xml" #modifying the port in the xml file
((toport=toport+1))
#sleep 1
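Putting that together, a sketch of the full loop (client and replace are the commands from the question; the 3-second pause is illustrative):

```shell
#!/bin/bash
# Sketch: launch each client in the background, give it time to connect
# on the current port, then bump the port in the XML for the next client.
toport=8601
for ((port = 8600; port < 8610; port++)); do
    client 10.xml &
    sleep 3                             # let this client connect first
    replace $port $toport -- "10.xml"   # modify the port in the xml file
    ((toport++))
done
wait   # block until all background clients have finished
```

The wait at the end keeps the script alive until every background client has exited, which also makes it usable from a test harness.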

Why does my detached loop eventually get killed?

In order to connect my VPS back to my home computer I have this script running on my home computer:
{ while true ; do ssh -nNR 1234:localhost:22 root@12.34.56.78 ; sleep 300 ; done ; } & disown
It starts a reverse ssh tunnel. If the connection gets broken for whatever reason the connection is restarted after 5 minutes. This seemed to be working well at first, but then I noticed that the loop only keeps running for a few days at most.
Why does it stop or get killed?
Check how many ssh processes this loop has spawned; you might also want to add the following option to your ssh command line:
-o ExitOnForwardFailure=yes
Check out autossh, which works much more nicely. I am using autossh+Cygwin on my home PC and can keep connections between my office and home for days without any interruption.
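A sketch of the autossh variant, assuming autossh is installed (the keepalive interval values are illustrative):

```shell
# Sketch: autossh restarts ssh whenever the connection dies. -M 0 turns
# off autossh's separate monitor port and relies on ssh's own keepalives
# to detect a dead connection.
autossh -M 0 -N \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -R 1234:localhost:22 root@12.34.56.78
```

With ExitOnForwardFailure=yes, ssh exits if the remote side refuses the port forward, so autossh can retry instead of holding a useless connection open.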

How to connect stdin of a list of commands (with pipes) to one of those commands

I need to give the user the ability to send/receive messages over the network (using netcat) while the connection is established (the user, in this case, is using nc as a client). The problem is that I need to send a line before the user starts interacting. My first attempt was:
echo 'my first line' | nc server port
The problem with this approach is that nc closes the connection when echo finishes executing, so the user can't send commands via stdin because the shell is given back to him (and the answer from the server is also never received, because the server takes a few seconds to start answering and, as nc has already closed the connection, the answer never reaches the user).
I also tried grouping commands:
{ echo 'my first line'; cat -; } | nc server port
It works almost the way I need, but if the server closes the connection, it waits until I press <ENTER> to give me the shell back. I need to get the shell back as soon as the server closes the connection (in this case, the client - my nc command - never closes the connection, unless I press Ctrl+C).
I also tried named pipes, without success.
Do you have any tip on how to do it?
Note: I'm using openbsd-netcat.
You probably want to look into expect(1).
It is cat that waits for the Enter. You could write a script that runs after nc exits to kill the cat; then you will get the shell back automatically.
You can try this to see if it works for you.
perl -e "\$|=1;print \"my first line\\n\" ; while (<STDIN>) {print;}" | nc server port
This one should produce the behaviour you want:
echo "Here is your MOTD." | nc server port ; nc server port
I would suggest cat << EOF, but I don't think it would work as you expect. I don't know how you could send the EOF when the connection is closed.
