run shell script on multiple remote machines - bash

WARNING: newbie with bash shell scripting.
I've created a script to connect to multiple remote machines, one by one. On each machine it should check whether a certain file already contains certain text; if it does, move on to the next machine and make the same check, and if not, append the text to the file before moving on.
Currently, the script connects to the first remote machine but then does nothing. If I type exit to close the remote machine's connection, the script continues running, which does me no good because I'm no longer connected to the remote machine.
On a side note, I'm not even sure the rest of the code is correct, so please let me know if there are any glaring mistakes. This is actually my first attempt at writing a shell script from scratch.
#!/bin/bash
REMOTE_IDS=( root@CENSOREDIPADDRESS1
             root@CENSOREDIPADDRESS2
             root@CENSOREDIPADDRESS3
           )
for REMOTE in "{$REMOTE_IDS[@]}"
do
  ssh -oStrictHostKeyChecking=no $REMOTE_IDS
  if grep LogDNAFormat "/etc/syslog-ng/syslog-ng.conf"
  then
    echo $REMOTE
    echo "syslog-ng already modified. Skipping."
    exit
    echo -
  else
    echo $REMOTE
    echo "Modifying..."
    echo "\n" >> syslog-ng.conf
    echo "### START syslog-ng LogDNA Logging Directives ###" >> syslog-ng.conf
    echo "template LogDNAFormat { template(\"<key:CENSOREDKEY> <${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} - $MSG\n\");" >> syslog-ng.conf
    echo "template_escape(no);" >> syslog-ng.conf
    echo "};" >> syslog-ng.conf
    echo "destination d_logdna {" >> syslog-ng.conf
    echo "udp(\"syslog-a.logdna.com\" port(CENSOREDPORT)" >> syslog-ng.conf
    echo "template(LogDNAFormat));" >> syslog-ng.conf
    echo "};" >> syslog-ng.conf
    echo "log {" >> syslog-ng.conf
    echo "source(s_src);" >> syslog-ng.conf
    echo "destination(d_logdna);" >> syslog-ng.conf
    echo "};" >> syslog-ng.conf
    echo "### END syslog-ng LogDNA logging directives ###" >> syslog-ng.conf
    killall -s 9 syslog-ng
    sleep 5
    /etc/init.d/syslog start
    echo -
  fi
done

Great question: Automating procedures via ssh is a laudable goal.
Let's start off with the first error in your code:
ssh -oStrictHostKeyChecking=no $REMOTE_IDS
should be:
ssh -oStrictHostKeyChecking=no $REMOTE
But that won't fix everything either. If you want ssh to run a set of commands, you can, but you'll need to pass those commands as a string argument to ssh.
ssh -oStrictHostKeyChecking=no $REMOTE 'Lots of code goes here - newlines ok'
For that to work, you'll need passwordless ssh configured (or you'll be prompted for credentials). This is covered in steps 1) and 2) of Alexei Grochev's post. One option for passwordless logins is to put public keys on the hosts you want to manage and, if necessary, change the IdentityFile in your local ~/.ssh/config (you may not need to do this if you are using a default public/private key pair).
You've got to be careful with ssh stealing your stdin (I don't think you'll have a problem in your case). If you suspect that the ssh command is reading all your stdin input, supply the -n parameter to ssh (again, I think your code does not suffer from this problem, but I didn't look too carefully).
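Putting those pieces together, a minimal sketch of what your loop could look like (the grep -q flag and the single-quoted command block are my choices, not from your original script):
for REMOTE in "${REMOTE_IDS[@]}"
do
  ssh -n -oStrictHostKeyChecking=no "$REMOTE" '
    if grep -q LogDNAFormat /etc/syslog-ng/syslog-ng.conf
    then
      echo "syslog-ng already modified. Skipping."
    else
      echo "Modifying..."
      # append the LogDNA directives to the config here, then restart syslog-ng
    fi
  '
done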
I agree with tadman's comment that this is a good application for Ansible. However, I wouldn't learn Ansible for this task alone. If you intend to do a lot of remote automation, Ansible would be well worth your time to learn and apply to this problem.
What I would suggest is pssh and pscp. These tools are awesome and take care of the "for" loop for you. They also perform the ssh calls in parallel and collect the results.
Here are the steps I would recommend:
1) Install pssh (pscp comes along for the ride).
2) Write your bash program as a separate file. It's so much easier to debug, update, etc. if your program isn't buried in a bunch of echo statements. Those hurt. Even my original suggestion of ssh user@host 'long string of commands' is difficult to debug. Just create a program file that runs on the remote hosts and debug it on a remote host (as you can).
3) Now go back to your control host (with that bash program). Push it to all of the hosts under management with pscp. The syntax is as follows:
# Your bash program is at <local-file-path>
chmod +x <local-file-path>
pscp -h<hosts-file> -l root <local-file-path> <remote-file-path>
The -h option specifies a list of hosts, one per line. So the hosts file would look like this:
CENSOREDIPADDRESS1
CENSOREDIPADDRESS2
CENSOREDIPADDRESS3
Incidentally, if you did not set up your public/private keys, you can specify the -A parameter and pscp and pssh will ask you for the root user's password. This isn't great for automation, but if you're doing a one-time task it's a lot easier than setting up public/private keys.
4) Now execute that program on the remote hosts:
pssh -h<hosts-file> -i <remote-file-path>
The -i parameter tells pssh to wait for the program to finish on all hosts and print each host's stdout and stderr inline.
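Putting steps 3) and 4) together, a sketch of the whole push-and-run sequence (hosts.txt and the script name here are placeholders, not from the question):
# hosts.txt holds one IP address per line, as shown above
chmod +x ./configure_logdna.sh
pscp -h hosts.txt -l root ./configure_logdna.sh /root/configure_logdna.sh
pssh -h hosts.txt -l root -i /root/configure_logdna.sh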
In summary, pssh/pscp are GREAT for small tasks like this. For larger tasks, consider Ansible (it basically works by sending Python scripts over ssh and executing them remotely). Puppet/Chef are way overkill for this, but they are fantastic tools for keeping your data center in the state you want it in.

You can do this with Puppet/Chef.
But it can also be done with bash if you have patience. I don't want to give actual code because I think it's best to understand the logic first.
However, since you asked, here is the flow you should follow:
make sure you have keys setup for all machines
create config with all the servers
put all servers into an array
create a loop to call each box and run your script (before that you will have to scp the script to the home dir on each box, so make sure it's good to run); see the sketch after this list
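A minimal sketch of that flow (the hostnames and the script name are placeholders):
SERVERS=( host1 host2 host3 )   # filled in from your config file
for S in "${SERVERS[@]}"
do
  scp ./task.sh "root@$S:task.sh"   # push the script to the home dir
  ssh -n "root@$S" 'bash ./task.sh' # then run it
done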
You can also do what you want better, imho, and that's how I've done it before:
1) make a script to read your file, and put it on cron to run every minute (or whatever interval is best), say echoing out the size of the file to a log file
2) all servers will have those scripts running, so now you just run your script to fetch the data across all servers (iterate through your array of servers in your config file)
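As a sketch of that cron idea, a crontab entry on each server might look like this (the watched file and log path are placeholders):
# log the file's size in bytes every minute
* * * * * stat -c%s /path/to/watched.file >> /var/log/filesize.log 2>&1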
That right there can also be done with PHP, where you have an instance of a webserver reading the file. You can also create a web server with bash... since it's only for one task it's not terribly insane.
Have fun.

Related

IFS read not getting executed completely when using commands over remote in linux

I am reading a file in a script using the method below and storing each line in myArray:
while IFS=$'\t' read -r -a myArray
do
  "do something"
done < file.txt
echo "ALL DONE"
Now in the "do something" area I am using some commands over ssh
ssh user@$SERVER "some command"
But the issue is that after executing this for the first line of file.txt, the script stops reading the file and skips to the next step; that is, I get the output
ALL DONE
But if I use local commands instead of commands over ssh, the script runs fine. I am not sure why this is happening. Can someone please suggest what I need to do?
You'll have to try giving the -n flag to ssh. From the manpage:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
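Applied to the loop in the question, that would look like this ("some command" stands in for the real commands):
while IFS=$'\t' read -r -a myArray
do
  ssh -n user@"$SERVER" "some command"
done < file.txt
echo "ALL DONE"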

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to write a bash script to automate the process for me. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information (by quitting vim or pressing return on a prompt), the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
  rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not in ssh itself. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Changing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
  rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I discovered that the command line buffer size is usually quite large (getconf ARG_MAX reported more than 2 MB where I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.

expect: launching scp after sftp

I could really use some help. I'm still pretty new to expect. I need to launch an scp command directly after I run sftp.
I got the first portion of this script working; my main concern is the bottom portion. I really need to launch a command after this one completes. I'd rather spawn another command than hack something up, like piping this into a sleep command and running it after 10 s or something weird.
Any suggestions are greatly appreciated!
spawn sftp user@host
expect "password: "
send "123\r"
expect "$ "
sleep 2
send "cd mydir\r"
expect "$ "
sleep 2
send "get somefile\r"
expect "$ "
sleep 2
send "bye\r"
expect "$ "
sleep 2
spawn scp somefile user2@host2:/home/user2/
sleep 2
So I figured out I can actually get this to launch the subprocess if I use "exec" instead of spawn. In other words:
exec scp somefile user2@host2:/home/user2/
The only problem? It prompts me for a password! This shouldn't happen; I already have the ssh keys installed on both systems. (In other words, if I run the scp command from the host I'm running this expect script on, it runs without prompting me for a password.) The system I'm trying to scp to must be treating this newly spawned process as a new host, because it's not picking up my ssh key. Any ideas?
BTW, I apologize that I haven't actually posted a "working" script; I can't really do that without compromising the security of this server. I hope that doesn't detract from anyone's ability to assist me.
I think the problem lies with me not terminating the initially spawned process. I don't understand expect well enough to do it properly. If I try "close" or "eof", it simply kills the entire script, which I don't want to do just yet (because I still need to scp the file to the second host).
Ensure that your SSH private key is loaded into an agent, and that the environment variables pointing to that agent are active in the session where you're calling scp.
[[ $SSH_AUTH_SOCK ]] || {        # if no agent already running...
  eval "$(ssh-agent -s)"         # ...then start one...
  ssh-add /path/to/your/ssh/key  # ...load your key...
  started_ssh_agent=1            # ...and flag that we started it ourselves
}
# ...put your script here...
[[ $started_ssh_agent ]] && {    # if we started the agent ourselves...
  eval "$(ssh-agent -s -k)"      # ...then clean up nicely when done.
}
As an aside, I'd strongly suggest replacing the code given in the question with something like the following:
lftp -u user,123 -e 'get /mydir/somefile -o localfile' sftp://host </dev/null
lftp sftp://user2@host2 -e 'put localfile -o /home/user2/somefile' </dev/null
Each connection handled in one line, and no silliness messing around with expect.

bash script to ssh multiple servers in a Loop and issue commands

I have a text file containing a list of servers. I'm trying to read the servers one by one from the file, SSH into each server, and execute ls to see the directory contents. My loop runs just once when I use the SSH command; however, with SCP it runs for all servers in the text file and exits. How can I make the loop run to the end of the text file with SSH as well? Following is my bash script:
#!/bin/bash
while read line
do
  name=$line
  ssh abc_def@$line "hostname; ls;"
  # scp /home/zahaib/nodes/fpl_* abc_def@$line:/home/abc_def/
done < $1
I run the script as $ ./script.sh hostnames.txt
The problem with this code is that ssh starts reading data from stdin, which you intended for read line. You can tell ssh to read from something else instead, like /dev/null, to avoid it eating all the other hostnames.
#!/bin/bash
while read line
do
  ssh abc_def@"$line" "hostname; ls;" < /dev/null
done < "$1"
A little more direct is to use the -n flag, which tells ssh not to read from standard input.
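With -n, the loop body becomes simply:
ssh -n abc_def@"$line" "hostname; ls;"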
Change your loop to a for loop:
for server in $(cat hostnames.txt); do
  # do your stuff here
done
It's not parallel ssh but it works.
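Filled in with the commands from the question (plus the -n flag from the answer above, so ssh doesn't consume anything it shouldn't), that might look like:
for server in $(cat hostnames.txt); do
  ssh -n abc_def@"$server" "hostname; ls;"
done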
I open-sourced a command line tool called Overcast to make this sort of thing easier.
First you import your servers:
overcast import server.01 --ip=1.1.1.1 --ssh-key=/path/to/key
overcast import server.02 --ip=1.1.1.2 --ssh-key=/path/to/key
Once that's done you can run commands across them using wildcards, like so:
overcast run server.* hostname "ls -Al" ./scriptfile
overcast push server.* /home/zahaib/nodes/fpl_* /home/abc_def/

how to script commands that will be executed on a device connected via ssh?

So, I've established a connection via ssh to a remote machine, and now what I would like to do is execute a few commands, grab some files, and copy them back to my host machine.
I am aware that I can run
ssh user@host "command1; command2; ... command_n"
and then close the connection, but how can I do the same without using the aforementioned syntax? I have a lot of complex commands with a bunch of quotes and special characters that would be a mess to escape.
Thanks!
My immediate thought is: why not create the script in a text file and push it over to the remote machine to run it there? If you can't for whatever reason, I fiddled around with this and I think you could probably do well with a heredoc:
ssh -t jane@stackoverflow.com bash << 'EOF'
command 1 ...
command 2 ...
command 3 ...
EOF
and it seems to do the right thing. Play with your heredoc to keep your quotes safe, but it will get tricky. The only other thing I can offer (and I totally don't recommend this) is that you could use a toy like perl to read and write to the ssh process, like so:
open S, "| ssh -i ~/.ssh/host_dsa -t jane@stackoverflow.com bash";
print S "date\n"; # and so on
but this is a really crummy way to go about things. Note that you can do this in other languages.
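Back on the heredoc approach: one detail worth knowing is that quoting the delimiter decides where variables expand. A small illustration (not from the original answer):
# quoted delimiter: $HOSTNAME is expanded on the remote machine
ssh -t jane@stackoverflow.com bash << 'EOF'
echo "running on $HOSTNAME"
EOF
# unquoted delimiter: $HOSTNAME is expanded locally, before ssh runs
ssh -t jane@stackoverflow.com bash << EOF
echo "running on $HOSTNAME"
EOF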
Instead of the shell, use some scripting language (Perl, Python, Ruby, etc.) and some module that takes care of the ugly work. For example:
#!/usr/bin/perl
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host, user => $user);
$ssh->system('echo', 'Net::Open$$H', 'Quot%$', 'Th|s', '>For', 'You!');
$ssh->system({stdout_file => '/tmp/ls.out'}, 'ls');
$ssh->scp_put($local_path, $remote_path);
my $out = $ssh->capture("find /etc");
From here: Can I ssh somewhere, run some commands, and then leave myself a prompt?
The use of an expect script seems pretty straightforward. Copied from the above link for convenience; it's not mine, but I found it very useful.
#!/usr/bin/expect -f
spawn ssh $argv
send "export V=hello\n"
send "export W=world\n"
send "echo \$V \$W\n"
interact
I'm guessing a line like
send "scp -Cpvr someLocalFileOrDirectory you#10.10.10.10/home/you
would get you your files back...
and then:
send "exit"
would terminate the session - or you could end with interact and type in the exit yourself..
