I need to pass the values of an array to a script on a remote host. The remote script creates a file for each array value.
Yes, I can do it like this:
for i in "${LIST[@]}"
do ssh root@${servers} bash "/home/test.sh" "$i"
done
but this is rather slow, as it opens a new ssh session for every array value. I also tried passing the whole array at once:
ssh root@${servers} bash "/home/test.sh" "${LIST[@]}"
but with that I get an error:
bash: line 1338: command not found
How can I do this?
Use the connection-sharing feature of ssh so that you only have a single, preauthenticated connection that is used by each ssh process in your loop.
# This is the socket all of the following ssh processes will use
# to establish a connection to the remote host.
socket=~/.ssh/ssh_mux
# This starts a background process that does nothing except keep the
# authenticated connection open on the specified socket file.
ssh -N -M -o ControlPath="$socket" root@${servers} &
# Each ssh process should start much more quickly, as it doesn't have to
# go through the authentication protocol each time.
for i in "${LIST[@]}"; do
    # This uses the existing connection to avoid having to authenticate again
    ssh -o ControlPath="$socket" root@${servers} "bash /home/test.sh '$i'"
    # The above command is still a little fragile, as it assumes that $i
    # doesn't contain a single quote.
done
# This closes the connection master
ssh -o ControlPath="$socket" -O exit root@${servers}
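If you use this pattern regularly, the same sharing can be set up once in ~/.ssh/config instead of on the command line. A sketch; the host alias and timings are placeholders:
# ~/.ssh/config
Host myserver
    ControlMaster auto                   # first connection becomes the master
    ControlPath ~/.ssh/ssh_mux_%h_%p_%r  # one socket per host/port/user
    ControlPersist 10m                   # keep the master alive after last use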
The alternative is to move the loop into the remote command, but this is fragile: the array isn't defined on the remote host, and there is no good way to transfer each element so that it is protected from the remote shell. If you weren't concerned about word-splitting, you could use
ssh root@${servers} "for i in ${LIST[*]}; do bash /home/test.sh \$i; done"
but then you probably wouldn't be using an array in the first place.
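One workaround sometimes used is to let the local shell quote each element with printf %q before embedding the loop in the remote command. A sketch that assumes bash on both ends:
# printf %q quotes every element so it survives the remote shell intact
ssh root@${servers} "for i in $(printf '%q ' "${LIST[@]}"); do bash /home/test.sh \"\$i\"; done"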
I'm working on a bash script that connects via SSH to a server running sish (https://github.com/antoniomika/sish). This essentially creates a port forward on the internet, like ngrok, using only SSH. Here is what happens during normal usage.
The command:
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10
The response:
Starting SSH Forwarding service for tcp:5900. Forwarded connections can be accessed via the following methods:
TCP: tun.domain.tld:43345
Now I need to send the ssh command to the background and capture the server's response in a variable, so that I can grab the port sish has assigned and send it somewhere (probably a webhook). I've tried a few things, like using -f and piping to a file or a named pipe and trying to cat it, but the piping never works: although the file is created, it's always empty. Any assistance would be greatly appreciated.
If you're running a single instance of sish (and of the tunnel you're attempting to define), you can have sish bind the specific port you want (in this case 5900).
You just set the --bind-random-ports=false flag on your server command to tell sish that it's okay not to use random ports.
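With that flag set, the port requested by the client is honored, per the flag's description above, so the original command from the question needs no output parsing at all:
# sish now binds tun.domain.tld:5900 directly instead of a random port
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10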
If you don't want to do this (or you have multiple clients that will expose this same port), you can use a simple script like the following:
#!/bin/bash
ADDR=""
# Start the tunnel. Use a phony command to tell ssh to clean the output.
exec 3< <(ssh -R 5900:localhost:5900 tun.domain.tld foobar 2>&1 | grep --line-buffered TCP | awk '{print $2; system("")}')
# Get our buffered output that is now just the address sish has given to us.
for i in 1; do
    read <&3 line
    ADDR="$line"
done
# Here is where you'd call the webhook
echo "Do something with $ADDR"
# If you want the ssh command to keep running in the background,
# omit the following. Otherwise this waits for the ssh command to
# exit, or kills it when this script dies.
PIDS=($(pgrep -P $(pgrep -P $$)))
function killssh() {
    kill ${PIDS[0]}
}
trap killssh EXIT
while kill -0 ${PIDS[0]} 2> /dev/null; do sleep 1; done;
sish also has an admin API which you can scrape. The information on that is available here.
References: I build and maintain sish and use it myself (as well as a similar type of script).
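For the webhook step itself, a hypothetical curl call (the endpoint URL is a placeholder) could replace the echo in the script above:
# hypothetical webhook call; replace the URL with your endpoint
curl -fsS --data-urlencode "addr=$ADDR" https://example.com/hook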
I am reading a file in a script using the method below, storing each line's fields in myArray:
while IFS=$'\t' read -r -a myArray
do
"do something"
done < file.txt
echo "ALL DONE"
Now in the "do something" area I am using some commands over ssh:
ssh user@$SERVER "some command"
The issue is that after executing this for the first line of file.txt, the script stops reading the rest of the file and skips to the next step, so I get the output
ALL DONE
If I use local commands instead of commands over ssh, the script runs fine. I am not sure why this is happening. Can someone please suggest what I need to do?
Try giving the -n flag to ssh; from the manpage:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
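Applied to your loop, the fix is a single flag; a sketch of your snippet, unchanged apart from -n:
while IFS=$'\t' read -r -a myArray
do
    # -n stops ssh from draining file.txt through its inherited stdin
    ssh -n user@$SERVER "some command"
done < file.txt
echo "ALL DONE"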
I have a bash script which does very plain sftp transfers of data to the production and UAT servers. See my code below.
if [ `ls -1 ${inputPath}|wc -l` -gt 0 ]; then
sh -x wipprod.sh >> ${sftpProdLog}
sh -x wipdev.sh >> ${sftpDevLog}
sh -x wipdevone.sh >> ${sftpDevoneLog}
fi
Sometimes the UAT server goes down, and in those cases the number of hung scripts keeps growing. If it reaches the user's maximum number of processes, the other scripts are affected as well. So I am thinking that before executing each of the above scripts I should test whether port 22 is reachable on the destination server, and only then run the script.
Is this the right way? If yes, what is the best way to do that? If not, what is the best approach to avoid unnecessary sftp connections when the destination is not available? Thanks in advance.
Use sftp in batch mode with the ConnectTimeout option explicitly set; sftp will then take care of up/down problems by itself.
Note that ConnectTimeout should be slightly higher if your network is slow.
Then put the sftp commands into your wip*.sh backup scripts.
If the UAT host is up:
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_up <<<"put testfile.xml /tmp/"; echo $?
sftp> put testfile.xml /tmp/
Uploading testfile.xml to /tmp/testfile.xml
0
The file is uploaded and sftp exits with exit code 0.
If the UAT host is down, sftp exits within 1 second with exit code 255:
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_down <<<"put testfile.xml /tmp/"; echo $?
ssh: connect to host this_host_is_down port 22: Connection timed out
Couldn't read packet: Connection reset by peer
255
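Inside each wip*.sh you can then act on the exit code. A minimal sketch; the host, file, and batch command are placeholders:
# skip the transfer cleanly when the host is unreachable
if sftp -b - -o ConnectTimeout=5 remoteuser@uat-host <<<"put testfile.xml /tmp/"; then
    echo "upload OK"
else
    echo "upload failed or host down, skipping" >&2
fi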
That sounds reasonable: if the server is inaccessible, you want to report an error immediately rather than block.
The question is why the SFTP command blocks at all when the server is unavailable. If the server is down, I'd expect opening the port to fail almost immediately, so you need only detect that the SFTP copy has failed and abort early.
If you want to detect a closed port in bash, you can simply ask bash to connect to it directly, for example:
(echo "" > /dev/tcp/remote-host/22) 2>/dev/null || echo "failed"
This will open the port and immediately close it, and report a failure if the port is closed.
On the other hand, if the server is inaccessible because the port is blocked (by a firewall or something else that drops all packets), then it makes sense for your process to hang, and the basic TCP test above will hang as well.
Again, this is something that should probably be handled by your SFTP remote copy using a timeout parameter, as suggested in the comments, but a bash script to detect a blocked port is also doable and will probably look something like this:
(
(echo "" > /dev/tcp/remote-host/22) &
pid=$!
timeout=3
while kill -0 $pid 2>/dev/null; do
sleep 1
timeout=$(( $timeout - 1 ))
[ "$timeout" -le 0 ] && kill $pid && exit 1
done
) || echo "failed"
(I'm going to ignore the ls ...|wc business, other than to say something like find and xargs --no-run-if-empty are generally more robust if you have GNU find, or possibly AIX has an equivalent.)
You can perform a runtime connectivity check. OpenSSH comes with ssh-keyscan to quickly probe an SSH server port and dump the public key(s), but sadly it doesn't provide a usable exit code, which leaves parsing the output as a messy solution.
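If you do want the ssh-keyscan route, the parsing is as messy as advertised. A sketch; the host name is a placeholder:
# treat any scanned key on stdout as "port 22 is answering"
ssh-keyscan -T 2 uat-host 2>/dev/null | grep -q . && echo up || echo down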
Instead you can do a basic check with a bash one-liner:
read -t 2 banner < /dev/tcp/127.0.0.1/22
where /dev/tcp/127.0.0.1/22 (or /dev/tcp/hostname/ssh) indicates the host and port to connect to.
This relies on the fact that the SSH server returns an identifying banner terminated with CRLF. Feel free to inspect $banner. If read times out after the indicated two seconds it is interrupted by SIGALRM (exit code 142), and a refused connection results in exit code 1.
(Support for /dev/tcp and network redirection is enabled by default since before bash-2.05, though it can be disabled explicitly with --disable-net-redirections or with --enable-minimal-config at build time.)
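Putting that together with your snippet, a hedged sketch (the helper name and host are mine, not from your script):
# hypothetical helper built on the one-liner above
ssh_up() { read -t 2 -r banner 2>/dev/null < "/dev/tcp/$1/22"; }
ssh_up uat-host && sh -x wipdev.sh >> ${sftpDevLog}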
To prevent such problems, an alternative is to set a timeout: with any of the ssh, scp or sftp commands you can set a connection timeout with the option -o ConnectTimeout=15, or, implicitly via ~/.ssh/config:
Host 1.2.3.4 myserver1
ConnectTimeout 15
The commands will return non-zero on timeout (though the three commands may not all return the same exit code on timeout). See this related question: how to make SSH command execution to timeout
Finally, if you have GNU parallel you may use its sem command to limit concurrency to prevent this kind of problem, see https://unix.stackexchange.com/questions/168978/limit-maximum-number-of-concurrent-scp-processes-running-on-a-host .
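With sem, a cap on concurrent transfers looks like this (a sketch; the -j value and id are arbitrary):
# queue each transfer; at most 3 run at once, the rest wait instead of piling up
sem -j 3 --id sftp "sh -x wipprod.sh >> ${sftpProdLog}"
sem -j 3 --id sftp "sh -x wipdev.sh >> ${sftpDevLog}"
sem --id sftp --wait   # block here until everything queued has finished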
I am writing a shell script on Solaris to check whether files on the remote host are done being written before transferring them to the local host. I have a skeleton, but there are parts I am not sure how to do. From a little reading I know the command to check a file's size is stat -c %s LogFiles.txt, but I am not sure how to run that check on the remote host.
# Get File Size on Remote Host
INITIALSIZE=
sleep 5
# Get File Size on Remote Host Again
LATESTSIZE=
# Loop 5 times
for i in {1..5}
do
    if [ "$INITIALSIZE" -ne "$LATESTSIZE" ]
    then
        sleep 5
        # Get File Size on Remote Host
        LATESTSIZE=
    else
        scp -P 22 $id@$ip:$srcpath/\*.txt $destpath
        break
    fi
done
Assuming that your measurement for 'done' is "file size constant for 5 sec", you can simply use ssh as follows:
ssh user@remote.machine "command to execute"
Its output can be piped or captured in a variable on the local machine, e.g. in your case:
latestsize=$( ssh user@remote.machine "<sizedeterminer> <file>" )
Passwordless login of course would skip the askpass problem. See point 3.3 in this manual or an example here.
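Putting it together, a sketch of the completed skeleton. It assumes passwordless ssh, and uses wc -c rather than stat -c %s since the latter is a GNU extension that Solaris may lack; it also carries the latest size forward so the comparison converges once writing stops:
# size in bytes of the remote file; wc -c is portable to Solaris
getsize() { ssh "$id@$ip" "wc -c < $srcpath/LogFiles.txt"; }

INITIALSIZE=$(getsize)
for i in {1..5}
do
    sleep 5
    LATESTSIZE=$(getsize)
    if [ "$INITIALSIZE" -eq "$LATESTSIZE" ]
    then
        scp -P 22 $id@$ip:$srcpath/\*.txt $destpath
        break
    fi
    INITIALSIZE=$LATESTSIZE
done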
I have a couple of machines on which I need to update a script. I can do this with a small bash script on my side, consisting of one while loop that reads IPs from a list and calls scp for each. That works fine, but when I try to run the updated script in the same loop, it breaks the loop, although the script runs fine on its own.
#!/bin/bash
cat ip_list.txt | while read i; do
    echo ${i}
    scp the_script root@${i}:/usr/sbin/   # works ok
    ssh root@${i} /usr/sbin/the_script    # works for the first IP, then breaks
done
Is this how it is supposed to work? If so, how can I run a script remotely via ssh without breaking the loop?
Use this:
ssh -n root@${i} /usr/sbin/the_script
The -n option tells ssh not to read from stdin. Otherwise it reads stdin and passes it through to the network connection, which consumes the rest of your input pipe.
You need to change the ssh line like this:
ssh root@${i} /usr/sbin/the_script < /dev/null