How to run a TCP/IP game on a cluster - bash

I wrote a TCP/IP game that involves one server and two clients which play the game, write some statistics about it, and exit. (It's two AIs playing a game.)
I wrote a shell script that launches those three children. However, currently no statistics are being written. Since the setup stage (clients connecting to the server) sometimes works and sometimes doesn't even get that far, I assume the children are being distributed over the cores/nodes in a way that keeps them from reaching the server. (?)
How would I generally solve this problem? Perhaps not with tmux? I'm running SGE, version 6.2u3beta.
Here's my shell script:
#!/bin/bash
# This script is supposed to take a json problem instance (name is problemNNNNN.json)
# Then open server with -i problem$SGE_TASK_ID.json -o -p open-port,
# then open two clients with -p open-port.
#$ -S /bin/bash
#$ -m n
#$ -l h_vmem=4G
## Tasks
#$ -t 1-1
#$ -cwd
problem_file=problem$SGE_TASK_ID.json
function find_open_port(){
    # Ports between 49152 and 65535 are usually unused.
    port=$(shuf -i 49152-65535 -n 1)
    # Then check whether something is already listening on that port.
    if lsof -Pi :$port -sTCP:LISTEN -t >/dev/null ; then
        find_open_port
    else
        # No service is listening on this port; the caller reads the global $port
        # (a return code can only hold 0-255, so we don't return the port itself).
        return 0
    fi
}
find_open_port
tmux new-session -d -s "$SGE_TASK_ID" "python server.py -p $port -i $problem_file"
sleep 1
tmux split-window -v -t "$SGE_TASK_ID" "python client.py -p $port"
sleep 1
tmux split-window -h -t "$SGE_TASK_ID" "python client.py -p $port"
exit
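If tmux is part of the problem, one minimal alternative sketch is to drop tmux entirely and run the three programs as background jobs of the SGE job script itself, assuming server.py and client.py exit on their own once the game is finished (the log file names here are made up):
# Sketch without tmux: all three processes stay inside the same job, on the same node.
find_open_port
python server.py -p $port -i $problem_file > server_$SGE_TASK_ID.log 2>&1 &
sleep 1   # give the server a moment to bind its port
python client.py -p $port > client1_$SGE_TASK_ID.log 2>&1 &
python client.py -p $port > client2_$SGE_TASK_ID.log 2>&1 &
wait      # the job only returns once the server and both clients have exited
Unlike a detached tmux session, these background processes are children of the job script, so the wait keeps the job alive until the game really finishes and the statistics have been written.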

Related

writing a shell ssh script for uploading a compiled sketch to multiple arduino yuns in a network

I work with a couple of Arduino Yuns and want to write a script to upload sketches to multiple of them. Let's assume I have a compiled Arduino program: sketch.hex.
Now I'd like to upload this file via LAN. For a single device it works like this.
Copying the sketch onto the device. (password required)
scp sketch.hex root@yun1.local:/tmp/sketch.hex
Opening an ssh session with the device. (password required)
ssh root@yun1.local
And then load the program onto the Atmega with the following 2 commands.
merge-sketch-with-bootloader.lua /tmp/sketch.hex
run-avrdude /tmp/sketch.hex
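For a single board, the same steps can also be chained into one shot (still prompting for the password twice); this is just the commands above combined, with yun1.local as the example host:
scp sketch.hex root@yun1.local:/tmp/sketch.hex
ssh root@yun1.local 'merge-sketch-with-bootloader.lua /tmp/sketch.hex && run-avrdude /tmp/sketch.hex'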
Now my question would be: how do I do this for multiple Arduinos (yun1, yun2, ..., yunN) without actually ssh-ing into each single device to run the last two commands?
Hope the question is not too confusing and thanks a lot in advance.
Update: I could figure it out myself. Here is the code in case someone needs it.
#!/bin/sh
# globalUpload.sh
#
#
# Created by maggu on 21/02/16.
#
clear
FILENAME="valve_adjusting.hex"
SSHPASS="doghunter"
SSHCOMMAND="ssh -p 22 -T -o StrictHostKeyChecking=no -o BatchMode=no"
PREFIX="root#linino"
PREFIXO="linino"
SUFFIX=".local"
YUNS=8
for i in `seq 1 $YUNS`
do
SSHACCOUNT=$PREFIX$i$SUFFIX
ssh-keygen -R $PREFIXO$i$SUFFIX
sshpass -p "doghunter" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sketch.hex $SSHACCOUNT:/tmp/sketch.hex
sshpass -p $SSHPASS $SSHCOMMAND $SSHACCOUNT << EOF_run_commands
merge-sketch-with-bootloader.lua /tmp/sketch.hex
run-avrdude /tmp/sketch.hex
EOF_run_commands
done

How to run a shell script with sudo inside through nohup on a remote machine

I ran the following command from my local machine:
ssh -i key remote_host "nohup sh test.sh > nohup.out 2> nohup.err < /dev/null &"
Then I got this error: sudo: sorry, you must have a tty to run sudo
So I added the -tt option:
ssh -tt -i key remote_host "nohup sh test.sh > nohup.out 2> nohup.err < /dev/null &"
I checked on the remote machine; test.sh was not running (there was no process ID).
When I take out the nohup, everything runs fine: ssh -tt -i key remote_host "sh test.sh". But I need to use nohup. Can someone help me? Thanks a lot!
On remote_host, the test.sh script:
#!/bin/bash
sudo iptables -A OUTPUT -p tcp --dport 443 -j DROP
sleep 30
sudo iptables -D OUTPUT -p tcp --dport 443 -j DROP
sudo is probably trying to prompt you for a password. You need to set up NOPASSWD in your remote_host's sudoers file, or you can use expect.
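A minimal sketch of the sudoers route, assuming the remote user is called deployuser (a placeholder) and that only iptables needs to run without a password; check the iptables path with which iptables and edit via visudo:
# /etc/sudoers.d/deployuser-iptables  (placeholder file name, edit with visudo -f)
# Lets deployuser run iptables as root without a password prompt.
deployuser ALL=(root) NOPASSWD: /sbin/iptables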

Bash script UDP error

I execute my bash script PLCCheck as a process:
./PLCCheck &
PLCCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP Port 6001.
When I want to execute my second bash script SQLCheck as a process that listens on UDP port 4001,
./SQLCheck &
SQLCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables, and each script works on its own as a single process. Why do I get this error?
I have checked the man page of nc. I think it is being used in the wrong way:
-l Used to specify that nc should listen for an incoming connection rather
than initiate a connection to a remote host. It is an error to use this
option in conjunction with the -p, -s, or -z options. Additionally,
any timeouts specified with the -w option are ignored.
...
-p source_port
Specifies the source port nc should use, subject to privilege restrictions
and availability. It is an error to use this option in conjunction with the
-l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p, just nc -l 4001. Maybe this is the error...
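If the installed netcat really is the OpenBSD variant quoted above, the listening ends of both scripts would drop -p and pass the port as the last argument, for example (ports as in the question, keeping -u where the script already uses UDP):
done < <(netcat -u -l 6001)   # PLCCheck listener on UDP port 6001
done < <(nc -l 4001)          # SQLCheck listener on port 4001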

open gnome terminal tabs programmatically and execute commands in sequence

When working remotely, I have a series of tabs that I open in gnome-terminal, and commands that I execute in them. I would like to automate all this setup as a single command.
If these commands could run independently and in parallel, I'd just adapt the answer to this question. In fact, I tried, using the following shell script:
gnome-terminal --working-directory="/home/superelectric" --tab -t "gate" -e 'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' --tab -t "mydesktop" -e 'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
Spread out over multiple lines, for readability:
gnome-terminal \
--working-directory="/home/superelectric" \
--tab \
-t "gate" \
-e \
'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' \
--tab \
-t "mydesktop" \
-e \
'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
The first part opens a tab, names it 'gate', and executes 'ssh gate_tunnel' within it. This is an ssh alias that opens a tunnel to 'mydesktop' at school, through the school's outward-facing server, 'gate'.
The second part opens another tab, names it 'mydesktop', and executes 'ssh tunneled_mydesktop' within it. This is another ssh alias, which connects to mydesktop through the tunnel.
~/.ssh/config:
Host gate_tunnel
LocalForward 8023 <my_desktop_at_school>:22
HostName <my_school_server>
That's the theory. In practice, the two commands execute in parallel, whereas I need to ensure that the first tab's command (open tunnel) completes before executing the second tab's command (connect through tunnel).
Is there maybe some command I can execute in the second tab, that 'waits' until the ssh tunnel is opened?
OK, I think I get it. As I mentioned in the comments, the first thing that comes to mind for reaching your school desktop from the outside is to ssh into the school gate and from there ssh into your desktop, with something like:
$ ssh -t gate.school.edu ssh desktop_name
There's only one tab then, so your problem doesn't exist.
However, there's something very cool about your current setup:
From home it's almost as if you had a direct connection to your desktop machine, so you can scp into it directly and forget about gate. With the solution above that's not possible anymore, because we end up with an indirect connection: if you want to scp, you have to do it from gate, and that sucks.
Check out this article on using ssh's ProxyCommand feature:
Transparent Multi-hop SSH
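A rough ~/.ssh/config sketch of that ProxyCommand idea, reusing the placeholder host names from the question (details depend on your ssh version; recent clients can use ProxyJump instead):
# Hop through the school gate transparently.
Host tunneled_mydesktop
    HostName <my_desktop_at_school>
    ProxyCommand ssh -W %h:%p <my_school_server>
With something like that in place, ssh tunneled_mydesktop and scp both work directly from home, without keeping a separate tunnel tab open.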
You get the best of both worlds then :)
Hmm... this may not be a perfect solution. Ideally you should use something that monitors the ssh connection, but you can check for the ssh process with ps and wait for the tunnel command to come alive.
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do # try 10 times
    if ps aux | grep -v grep | grep "<my_desktop_at_school>" > /dev/null; then
        # the tunnel is connected, now execute the second command
        bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"
        break
    else
        : # or you could do something here if you wish
    fi
    sleep 10 # sleep for 10 seconds and try again
    let COUNTER=COUNTER+1
done
You will have to run this script in the second tab.
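Since the gate_tunnel entry in the question forwards local port 8023, a hedged alternative to grepping ps is to poll that forwarded port until it accepts connections (this swaps the ps check for nc; adjust the port if your config differs):
# Wait for the tunnel's local forward (port 8023 in the question) to come up, then connect.
until nc -z localhost 8023; do
    sleep 2
done
ssh tunneled_mydesktop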
Hope it helps.

how can I know ssh is disconnected and retry with a bash script

I'm using reverse ssh to connect to a remote client. An operator runs the reverse ssh once and then leaves the client system.
How can I write a bash script so that, when the reverse ssh gets disconnected from the server, it retries the connection to the server (ssh)?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels"
sudo apt-get install autossh
I use autossh to keep open a reverse tunnel that I depend on. It works very well, even with long periods of lost connection.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
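As a concrete, hypothetical instance of that pattern for a reverse tunnel (host and port borrowed from the autossh example above):
# Keep retrying the reverse tunnel; a dropped connection makes ssh exit non-zero, so the loop runs again.
while ! ssh -N -R 8022:localhost:22 username@host
do
    echo "Tunnel dropped, retrying in 5 seconds..." >&2
    sleep 5
done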
I have a slightly different method.
My method always tries to reconnect you if you have a dirty disconnection: '~.' or 'Connection closed by remote host.'
But if you disconnect with 'CTRL+D' or with 'exit', it just disconnects and shows you some info about the connections.
#!/bin/bash
if [ -z "$1" ]
then
echo '''
Please also provide ssh connection details.
'''
exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
((retries+=1)) &&
echo "Try number $retries..." &&
today=$(date) &&
ssh "$#" &&
repeat=false
sleep 5
done
echo """
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"""
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without making this obvious. Let me demonstrate:
#!/bin/sh
while true; do
ssh -T user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
sleep 2
ssh -T -N user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
-o Batchmode=yes \
-o ExitOnForwardFailure=yes \
-o ServerAliveCountMax=1 \
-o ServerAliveInterval=60 \
-o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
-o RemoteForward=127.0.0.1:2501=127.0.0.1:25
sleep 60
done
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to the ssh port, then use grep to filter for the IP address you're looking for. If you don't find a connection, reconnect the tunnel.
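A small sketch of that check, with a placeholder server IP and the reverse-tunnel port used in the examples above:
# Reconnect the reverse tunnel if there is no established ssh session to the server.
SERVER=203.0.113.10   # placeholder: your server's IP
if ! ss -ntp | grep ":22" | grep -q "$SERVER"; then
    ssh -fN -R 8022:localhost:22 username@"$SERVER"
fi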
Use autossh if it works on your version of Linux. It did not on mine as it was an outdated Linux distribution for a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
if ! netstat -planet |grep myserver_ip_or_name |grep ESTABLISHED > /dev/null; then
echo "REVERSE SSH DOWN - Restarting the tunnels"
ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to crontab by typing crontab -e and adding the following line:
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure to have the execute permissions on the script:
chmod 755 maintain_reverse_ssh_tunnel.sh
