Loop broken by ssh running a remote script - bash

I have a couple of machines on which I need to update a script. I can do this with a small bash script on my side, consisting of one while loop that reads IPs from a list and calls scp for each of them. That works fine, but when I also try to run the updated script in the loop, it breaks the loop, although the script itself runs fine.
#!/bin/bash
cat ip_list.txt | while read i; do
    echo ${i}
    scp the_script root@${i}:/usr/sbin/   # works ok
    ssh root@${i} /usr/sbin/the_script    # works for the first IP, then breaks
done
Is this how it is supposed to work? If so, how can I run a script remotely via ssh without breaking the loop?

Use this:
ssh -n root@${i} /usr/sbin/the_script
The -n option tells ssh not to read from stdin. Otherwise, it reads stdin and passes it through to the network connection, which consumes the rest of the input pipe, i.e. all the remaining IPs.
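Putting it together, a minimal sketch of the corrected loop (same file names as in the question):
#!/bin/bash
while read -r i; do
    echo "${i}"
    scp the_script "root@${i}:/usr/sbin/"
    # -n keeps ssh from consuming the rest of ip_list.txt
    ssh -n "root@${i}" /usr/sbin/the_script
done < ip_list.txt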

You need to change the ssh line like this, redirecting its stdin from /dev/null:
ssh root@${i} /usr/sbin/the_script < /dev/null

Related

Capturing ssh output in bash script while backgrounding connection

I have a loop that will connect to a server via ssh to execute a command. I want to save the output of that command.
o=$(ssh $s "$@")
This works fine. I can then do what I need with the output. However, I have a lot of servers to run this against, and I'm trying to speed up the process by backgrounding the ssh connections, basically to do all of the requests at once. If I wasn't saving the output, I could do something like
ssh $s "$@" &
and this works fine.
I haven't been able to get the correct combination to do both.
o=$(ssh $s "$@") &
This doesn't give me any output. Other combinations I've tried appear to try to execute the output. Suggestions?
Thanks!
A process sent to the background gets its own copies of the file descriptors, so the stdout captured by o=... is not available in the calling process. However, you can redirect stdout to a file and read the file afterwards.
ssh $s "$@" > outfile &
wait
o=$(cat outfile)
If you don't like files, you could also use named pipes. This way the 'wait' is done by the 'cat' command. The pipe can be reused and consumes no space on the disk.
mkfifo testpipe
ssh $s "$@" > testpipe &
o=$(cat testpipe)
I would just use a temporary file. You can't set a variable in a background process and access it from the shell that started it.
ssh "$s" "$#" > output.txt & ssh_pid=$!
...
wait "$ssh_pid"
o=$(<output.txt)
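To run against many servers at once, the same idea scales to one output file per server; a minimal sketch (servers.txt and the output directory are my assumptions, not from the question):
#!/bin/bash
mkdir -p output
while read -r s; do
    # -n keeps the backgrounded ssh from consuming servers.txt on stdin
    ssh -n "$s" "$@" > "output/$s.txt" &
done < servers.txt
wait   # block until every background ssh has exited
for f in output/*.txt; do
    echo "== $f =="
    cat "$f"
done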

IFS read loop not executing completely when running commands over ssh in Linux

I am reading a file in a script using the method below and storing each line in myArray
while IFS=$'\t' read -r -a myArray
do
    "do something"
done < file.txt
echo "ALL DONE"
Now in the "do something" area I am using some commands over ssh
ssh user@$SERVER "some command"
But the issue is that after executing this for the first line of file.txt, the script stops reading the file and skips straight to the next step, i.e. I get the output
ALL DONE
But if, instead of the commands over ssh, I use local commands, the script runs fine. I am not sure why this is happening. Can someone please suggest what I need to do?
You'll have to try giving the -n flag to ssh. From the manpage:
-n      Redirects stdin from /dev/null (actually, prevents reading from
        stdin). This must be used when ssh is run in the background. A
        common trick is to use this to run X11 programs on a remote
        machine. For example, ssh -n shadows.cs.hut.fi emacs & will
        start an emacs on shadows.cs.hut.fi, and the X11 connection will
        be automatically forwarded over an encrypted channel. The ssh
        program will be put in the background. (This does not work if
        ssh needs to ask for a password or passphrase; see also the -f
        option.)
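Applied to the loop in the question, only the ssh line changes; a minimal sketch:
while IFS=$'\t' read -r -a myArray
do
    # -n stops ssh from swallowing the rest of file.txt on stdin
    ssh -n user@$SERVER "some command"
done < file.txt
echo "ALL DONE"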

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information (by quitting vim or pressing return on a prompt), the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
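With this version the remote commands come in as arguments, so an invocation might look like this (the wrapper's file name is my own, hypothetical):
./run_remote.sh 'echo hello; vim; echo goodbye'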
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, never finding a fully satisfactory one. Piping into ssh loses your interactivity. Using two connections (scp then ssh) is slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command line buffer size is usually quite large (getconf ARG_MAX reported more than 2 MB where I looked), and this got me thinking about how I could use that and mitigate the escaping issue at the same time.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.

bash script to ssh multiple servers in a Loop and issue commands

I have a text file containing a list of servers. I'm trying to read the servers one by one from the file, SSH into each server, and execute ls to see its directory contents. With the SSH command my loop runs just once, whereas with SCP it runs for all servers in the text file and then exits. I want the loop to run to the end of the text file for SSH as well. Following is my bash script; how can I make it run for all the servers in the text file while doing SSH?
#!/bin/bash
while read line
do
    name=$line
    ssh abc_def@$line "hostname; ls;"
    # scp /home/zahaib/nodes/fpl_* abc_def@$line:/home/abc_def/
done < $1
I run the script as $ ./script.sh hostnames.txt
The problem with this code is that ssh starts reading data from stdin, which you intended for read line. You can tell ssh to read from something else instead, like /dev/null, to avoid eating all the other hostnames.
#!/bin/bash
while read line
do
    ssh abc_def@"$line" "hostname; ls;" < /dev/null
done < "$1"
A little more direct is to use the -n flag, which tells ssh not to read from standard input.
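In that case the loop body would simply be:
ssh -n abc_def@"$line" "hostname; ls;"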
Change your loop to a for loop:
for server in $(cat hostnames.txt); do
    # do your stuff here
done
It's not parallel ssh but it works.
I open-sourced a command line tool called Overcast to make this sort of thing easier.
First you import your servers:
overcast import server.01 --ip=1.1.1.1 --ssh-key=/path/to/key
overcast import server.02 --ip=1.1.1.2 --ssh-key=/path/to/key
Once that's done you can run commands across them using wildcards, like so:
overcast run server.* hostname "ls -Al" ./scriptfile
overcast push server.* /home/zahaib/nodes/fpl_* /home/abc_def/

In my bash loop over a list of servers, the script exits after executing a perl script that uses ssh

I have a problem similar to this one:
in my bash loop over a list of some servers, if the ssh connects the bash script exits
Unfortunately, ssh is called from a perl script I can't edit (so I won't be able to add -n to the ssh command).
What else could be done?
Put a fake ssh in your path that delegates the call to the real ssh and adds -n.
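A minimal sketch of such a wrapper, assuming the real binary lives at /usr/bin/ssh (adjust for your system):
#!/bin/bash
# Fake ssh: forwards all arguments to the real ssh, adding -n so it
# never reads from the calling loop's stdin.
exec /usr/bin/ssh -n "$@"
Save it as ssh in a directory such as ~/bin, make it executable, and run the perl script with that directory first in PATH (e.g. PATH=~/bin:$PATH) so the wrapper is found before the real ssh.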
I did:
my_script < /dev/null
and it works just fine.
