I am trying to copy my profile to a list of servers (about 5K). I am using a private key for authentication. Everything runs smoothly and as expected, but I have one small annoyance:
Sometimes a server does not accept my key (that's OK, I don't care if a few servers don't receive my profile), but when the key is rejected, a prompt pops up asking for a password and stops the execution until I type CTRL-C to abort it.
How can I make sure scp uses the key and ONLY the key, and never prompts for any password?
NOTE: I'm planning to add an ampersand at the end so all the copies will be done in parallel later.
Code
#!/bin/bash
while read server
do
scp -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rouser/
done <<< "$( cat all_servers.txt )"
Use scp's -B flag:
-B      Selects batch mode (prevents asking for passwords or passphrases)
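Applied to the loop from the question, a sketch might look like this (-B is OpenSSH scp's batch mode; the PasswordAuthentication and ConnectTimeout options are defensive additions of mine, not part of the original answer):

```shell
#!/bin/bash
# Batch mode: a rejected key makes the copy fail instead of prompting.
# PasswordAuthentication=no enforces the same guarantee at the ssh-option
# level; ConnectTimeout keeps dead hosts from stalling the loop.
while read -r server; do
  scp -B \
      -o StrictHostKeyChecking=no \
      -o PasswordAuthentication=no \
      -o ConnectTimeout=10 \
      ./.bash_profile "rouser@${server}:/home/rouser/"
done < all_servers.txt
```

With this in place, adding the planned ampersand for parallelism is safe: a failing server just exits non-zero instead of hanging on a prompt.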
Related
So I'm trying to make an if statement that tells me whether an sftp connection succeeded or failed, and if it's a success I want to run a piece of code that automates an sftp download that I've already made.
My problem is that this if statement executes the sftp connection, and then prompts me for a password and stalls the rest of the code.
I wanted to do something like this:
if ( sftp -oPort=23 user@server )
then
expect <<-EOF
spawn sftp -oPort=23 user@server
.....
I want to know if it's possible to make the if statement not execute the sftp connection and thus not prompt me, maybe by executing it in the background or something.
I would appreciate it if someone could tell me whether what I'm asking is possible, or propose a better solution to what I'm trying to do. Thanks.
You cannot not-execute a command and then react to the return value of the executed command (because this is what you really want to do: check whether you can run sftp successfully, and if so do a "proper" run; but you'll never know whether it can run successfully without running it).
So the main question is what it is that you actually want to test.
If you want to test whether you can do a full sftp connection (with all the handshaking and what not), you could try running sftp in batch-mode (which is handily non-interactive).
E.g. the following runs an sftp session, only to terminate it immediately with a bye command:
if echo bye | sftp -b - -oPort=23 user@server ; then
echo "sftp succeeded"
fi
This will only succeed if the entire sftp session works (that is: you pass any key checks; you can authenticate, ...).
If the server asks you for a password, it will fail to authenticate (being non-interactive), and you won't enter the then body.
If you only want to check whether something is listening on port 23, you can use netcat for this:
if netcat -z server 23; then
echo "port 23 is open"
fi
This will succeed whenever it can successfully bind to port 23 on the server. It doesn't care whether there's an sftp daemon running, or (more likely) a telnet daemon.
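If netcat isn't installed, bash itself can attempt the same TCP connect via its /dev/tcp pseudo-device. This is a sketch of mine, not from the original answer; port 1 on localhost is used here only because it is almost certainly closed:

```shell
#!/bin/bash
# Try to open a TCP connection using bash's /dev/tcp; timeout guards
# against hosts that silently drop packets instead of refusing them.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
  status=open
else
  status=closed
fi
echo "port 1 is $status"
```

Like netcat -z, this only proves something answers on the port, not that it speaks SFTP.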
You could also do a minimal test of whether the remote server looks like an SSH/SFTP server: SSH servers usually greet you with a string indicating that they indeed speak ssh: something like "SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4".
With this information you can then run:
if echo QUIT | netcat server 23 | grep SSH; then
echo "found an ssh server"
fi
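Another probe worth sketching (my suggestion, not from the answer above): running a no-op command over ssh in batch mode tests reachability and key-based authentication in a single step. The user, server, and port are placeholders:

```shell
# BatchMode=yes forbids password prompts, so this succeeds only if the
# key is accepted; ConnectTimeout bounds the wait for dead hosts.
if ssh -o BatchMode=yes -o ConnectTimeout=5 -p 23 user@server true; then
  echo "key-based ssh login works"
fi
```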
WARNING: newbie with bash shell scripting.
I've created a script to connect to multiple remote machines, one by one, and check whether a certain file already has certain text in it; if it does, move to the next machine and make the same check; if not, append the text to the file, then move to the next machine.
Currently, the script connects to the first remote machine but then does nothing. If I type exit to close the remote machine's connection, it then continues running the script, which does me no good because I'm not connected to the remote machine any longer.
On a side note, I'm not even sure the rest of the code is correct, so please let me know if there are any glaring mistakes. This is actually my first attempt at writing a shell script from scratch.
#!/bin/bash
REMOTE_IDS=( root@CENSOREDIPADDRESS1
root@CENSOREDIPADDRESS2
root@CENSOREDIPADDRESS3
)
for REMOTE in "${REMOTE_IDS[@]}"
do
ssh -oStrictHostKeyChecking=no $REMOTE_IDS
if grep LogDNAFormat "/etc/syslog-ng/syslog-ng.conf"
then
echo $REMOTE
echo "syslog-ng already modified. Skipping."
exit
echo -
else
echo $REMOTE
echo "Modifying..."
echo "\n" >> syslog-ng.conf
echo "### START syslog-ng LogDNA Logging Directives ###" >> syslog-ng.conf
echo "template LogDNAFormat { template(\"<key:CENSOREDKEY> <${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} - $MSG\n\");" >> syslog-ng.conf
echo "template_escape(no);" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "destination d_logdna {" >> syslog-ng.conf
echo "udp(\"syslog-a.logdna.com\" port(CENSOREDPORT)" >> syslog-ng.conf
echo "template(LogDNAFormat));" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "log {" >> syslog-ng.conf
echo "source(s_src);" >> syslog-ng.conf
echo "destination(d_logdna);" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "### END syslog-ng LogDNA logging directives ###" >> syslog-ng.conf
killall -s 9 syslog-ng
sleep 5
/etc/init.d/syslog start
echo -
fi
done
Great question: Automating procedures via ssh is a laudable goal.
Let's start off with the first error in your code:
ssh -oStrictHostKeyChecking=no $REMOTE_IDS
should be:
ssh -oStrictHostKeyChecking=no $REMOTE
But that won't do everything either. If you want ssh to run a set of commands, you can, but you'll need to pass those commands in a string as an argument to ssh.
ssh -oStrictHostKeyChecking=no $REMOTE 'Lots of code goes here - newlines ok'
For that to work, you'll need to have passwordless ssh configured ( or you'll be prompted for credentials ). This is covered in steps 1) and 2) in Alexei Grochev's post. One option for passwordless logins is to put public keys on the hosts you want to manage and, if necessary, change the IdentityFile in your local ~/.ssh/config ( you may not need to do this if you are using a default public / private key pair ) .
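For example, a minimal ~/.ssh/config entry might look like this (the Host pattern and key path are hypothetical; adjust them to your environment):

```
# Applies to every host matching the pattern; ssh, scp, and pssh all honor it.
Host CENSOREDIPADDRESS*
    User root
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
```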
You've got to be careful with ssh stealing your stdin ( I don't think you'll have a problem in your case ). In cases where you suspect that the ssh command is reading all your stdin input, you'll need to supply the -n parameter to ssh ( again, I think your code does not suffer from this problem, but I didn't look too carefully ).
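The stdin-stealing problem is easy to reproduce locally without ssh at all; here's a sketch using cat as a stand-in for any command that reads stdin inside a while read loop:

```shell
#!/bin/bash
# Without a stdin guard, the command inside the loop swallows the rest
# of the host list; redirecting its stdin (what ssh -n does) fixes it.
printf 'host1\nhost2\nhost3\n' > hosts.tmp

count=0
while read -r host; do
  cat > /dev/null              # reads the remaining lines, like ssh without -n
  count=$((count + 1))
done < hosts.tmp
echo "unguarded loop ran $count time(s)"    # runs once

count=0
while read -r host; do
  cat > /dev/null < /dev/null  # stdin from /dev/null, like ssh -n
  count=$((count + 1))
done < hosts.tmp
echo "guarded loop ran $count time(s)"      # runs three times
rm -f hosts.tmp
```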
I agree with tadman's comment, that this is a good application for Ansible. However, I wouldn't learn Ansible for this task alone. If you intend on doing a lot of remote automation, Ansible would be well worth your time learning and applying to this problem.
What I would suggest is pssh and pscp. These tools are awesome and take care of the "for" loop for you. They also perform the ssh calls in parallel and collect the results.
Here are the steps I would recommend:
1) Install pssh (pscp comes along for the ride).
2) Write your bash program as a separate file. It's so much easier to debug, update, etc. if your program isn't in a bunch of echo statements. Those hurt. Even my original suggestion of ssh user@host 'long string of commands' is difficult to debug. Just create a program file that runs on the remote hosts and debug it on the remote host ( as far as you can ).
3) Now go back to your control host ( with that bash program ). Push it to all of the hosts under management with pscp. The syntax is as follows:
# Your bash program is at <local-file-path>
chmod +x <local-file-path>
pscp -h<hosts-file> -l root <local-file-path> <remote-file-path>
The -h option specifies a list of hosts. So the hosts file would look like this:
CENSOREDIPADDRESS1
CENSOREDIPADDRESS2
CENSOREDIPADDRESS3
Incidentally, if you did not set up your public/private keys, you can specify the -A parameter and pscp and pssh will ask you for the root user's password. This isn't great for automation, but if you are doing a one time task it is a lot easier than setting your public/private keys.
4) Now execute that program on the remote hosts:
pssh -h<hosts-file> -i <remote-file-path>
The -i parameter tells pssh to wait for the program to execute on all hosts and return the stdout and stderr results in line.
In summary, pssh/pscp are GREAT for small tasks like this. For larger tasks, consider Ansible ( it basically works by sending python scripts over ssh and executing them remotely ). Puppet/Chef are way overkill for this, but they are fantastic tools for keeping your data center in the state that you want it in.
You can do this with Puppet/Chef.
But this can also be done with bash if you have patience. I don't want to give actual code because I think it's best to understand the logic first.
However, since you asked, here is the flow you should follow:
make sure you have keys setup for all machines
create config with all the servers
put all servers into an array
create a loop to call each box and run your script (beforehand you will have to scp the script to the home dir on the box, so make sure it's good to run)
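The steps above might be sketched like this (the file names, user, and paths are placeholders, not from the answer):

```shell
#!/bin/bash
# Read the server list from a config file, push the script, then run it.
mapfile -t servers < servers.conf
for box in "${servers[@]}"; do
  scp -o BatchMode=yes myscript.sh "user@${box}:~/" &&
  ssh -n -o BatchMode=yes "user@${box}" 'bash ~/myscript.sh'
done
```

The -n on ssh keeps it from consuming the loop's input, and BatchMode assumes keys are already set up, per step one.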
You can also do what you want better, IMHO, and that's how I've done it before:
1) Make a script to read your file and put it on cron to run every minute or whatever time is best; say, echo out the size of the file to a log file.
2) All servers will have those scripts running, so now you just run your script to fetch the data across all servers (iterate through your array of servers in your config file).
^^ That right there can also be done with PHP, where you have an instance of a web server reading the file. You can also create a web server with bash... since it's only for one task it's not terribly insane.
Have fun.
I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session with a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
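For anyone puzzled why that one line mattered, here's a minimal local sketch (no ssh involved) of what commands=$(</dev/stdin) does: it slurps all of stdin up front, leaving nothing for later commands that want to read it, such as an interactive ssh -t session:

```shell
#!/bin/bash
# $(</dev/stdin) reads stdin to EOF; any later read finds it empty.
demo() {
  commands=$(</dev/stdin)      # consumes everything on stdin
  read -r leftover || leftover="<nothing left>"
  echo "captured: $commands"
  echo "later read got: $leftover"
}
result=$(printf 'line1\nline2\n' | demo)
echo "$result"
```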
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX gave > 2 MB where I looked). And this got me thinking about how I could use this to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
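The decode-and-run half of the trick can be sketched locally without ssh (the script body here is a made-up example):

```shell
#!/bin/bash
# Encode a script to a single base64 line, then execute it via process
# substitution with arguments; over ssh, the decode happens remotely.
script='echo "args: $1 $2"'
encoded=$(printf '%s' "$script" | base64 | tr -d '\n')
result=$(/bin/bash <(printf '%s' "$encoded" | base64 --decode) foo bar)
echo "$result"   # args: foo bar
```

Because the payload is one base64 token, nothing in the script body needs shell escaping on the command line.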
I could really use some help. I'm still pretty new with expect. I need to launch an scp command directly after I run sftp.
I got the first portion of this script working; my main concern is the bottom portion. I really need to launch a command after this one completes. I'd rather be able to spawn another command than hack something up, like piping this into a sleep command and running it after 10 s or something weird.
Any suggestions are greatly appreciated!
spawn sftp user@host
expect "password: "
send "123\r"
expect "$ "
sleep 2
send "cd mydir\r"
expect "$ "
sleep 2
send "get somefile\r"
expect "$ "
sleep 2
send "bye\r"
expect "$ "
sleep 2
spawn scp somefile user2@host2:/home/user2/
sleep 2
So I figured out I can actually get this to launch the subprocess if I use "exec" instead of spawn. In other words:
exec scp somefile user2@host2:/home/user2/
The only problem? It prompts me for a password! This shouldn't happen; I already have the SSH keys installed on both systems. (In other words, if I run the scp command from the host I'm running this expect script on, it runs without prompting me for a password.) The system I'm trying to scp to must be recognizing this newly spawned process as a new host, because it's not picking up my ssh key. Any ideas?
BTW, I apologize that I haven't actually posted a "working" script; I can't really do that without compromising the security of this server. I hope that doesn't detract from anyone's ability to assist me.
I think the problem lies with me not terminating the initially spawned process. I don't understand expect enough to do it properly. If I try "close" or "eof", it simply kills the entire script, which I don't want to do just yet (because I still need to scp the file to the second host).
Ensure that your SSH private key is loaded into an agent, and that the environment variables pointing to that agent are active in the session where you're calling scp.
[[ $SSH_AUTH_SOCK ]] || { # if no agent already running...
eval "$(ssh-agent -s)" # ...then start one...
ssh-add /path/to/your/ssh/key # ...load your key...
started_ssh_agent=1 # and flag that we started it ourselves
}
# ...put your script here...
[[ $started_ssh_agent ]] && { # if we started the agent ourselves...
eval "$(ssh-agent -s -k)" # ...then clean up nicely when done.
}
As an aside, I'd strongly suggest replacing the code given in the question with something like the following:
lftp -u user,123 -e 'get /mydir/somefile -o localfile' sftp://host </dev/null
lftp scp://user2@host2 -e 'put localfile -o /home/user2/somefile' </dev/null
Each connection handled in one line, and no silliness messing around with expect.
I am starting an FTAM server (ft820.rc on CentOS 5) using bash 3.0, and I am having an issue with starting it from a script; namely, in the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I do on the machine defined by $ip
/etc/init.d/ft820.rc start
I will get the prompt back just after the service is started.
This is the code for start in ft820.rc
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
mask=`umask`
umask 0000
# startup the lock manager
${BINPATH}/lockmgr -u 16
# update attribute database
${BINPATH}/fua ${CONFIGFILE} > /dev/null
# clear concurrency locks
${BINPATH}/finit -cy ${CONFIGFILE} >/dev/null
# startup filestore
${BINPATH}/ffs ${CONFIGFILE}
if [ $? = 0 ]
then
echo Vertel FT-820 Filestore running.
else
echo Error detected while starting Vertel FT-820 Filestore.
fi
umask $mask
I repost here (on request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... in the commandline? ie, can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public_key that you previously inserted in the destination root@$ip:~/.ssh/authorized_keys file ? – Olivier Dulac 20 hours ago "
"you say that, at the commandline (and NOT in the script) you can ssh root@.... and it works without asking for your pwd ? (ie, it can then be run from a script?) – Olivier Dulac 20 hours ago "
" try the ssh without the '-n' and even without -nq at all : ssh root@$ip /etc/init.d/ft820.rc start (you could even add ssh -v , which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago "
"also : before the "ssh..." line in the script, make another line with, for example: ssh root@ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help be sure the ssh part is working. The "set" part will also show you the running shell (ex: if it contains BASH= , you're running bash. Otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago "
" please try without the '-n' (= run in background and wait, instead of just run and then quit). If it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago "
Apparently what worked was to add the -t option to the ssh command. (you can go up to put '-t -t -t' to further force it to try to allocate the tty, depending on the situation)
I guess it's because the invoked command expected to be run within an interactive session, and so needed a "tty" as its stdout.
A possibility (just a wild guess): the invoked rc script outputs information, but in a buffered environment (i.e., when not launched via your terminal), the calling script couldn't see enough lines to fill the buffer and start printing anything out. (It's like doing "grep something | something else" in a buffered environment and hitting Ctrl+C before the buffer gets big enough to display anything: you end up thinking no lines were found by the grep, whereas there were maybe a few lines already in the buffer.) There is tons to be said about buffering, and I am just beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was outputting to a live terminal session, and that may have turned off the buffering and allowed the result to show. Maybe in the first case it worked too, but you could never see the output?
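The tty half of that guess is easy to verify locally: a command can test whether its stdout is a terminal with [ -t 1 ], which is exactly the condition ssh -t changes on the remote side (this snippet is illustrative, not from the original):

```shell
#!/bin/bash
# Inside $( ), stdout is a pipe, so the test reports "not a tty";
# run the same check directly in a terminal and it reports a tty.
check='if [ -t 1 ]; then echo "stdout is a tty"; else echo "stdout is not a tty"; fi'
direct=$(bash -c "$check")
echo "$direct"   # stdout is not a tty
```

Programs (and the C library's stdio) commonly switch from line buffering to block buffering based on this same check, which fits the behavior described above.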