How to edit ssh bash completion for config file entries? - bash

So I am aware one can edit bash tab completion scripts with functions for a variety of behaviors.
I have configured a long list of aws hosts in my .ssh/config, written as dot-separated names, like so:
Host aws.<stack_name>.<instance_name>
HostName X.X.X.X
By default the ssh bash completion script displays the full list of hosts declared in the config file, which is already pretty good. But I would like to extend this behavior. Currently the list is displayed like so:
$ ssh aws.[tab][tab]
aws.stack1.inst1 aws.stack2.inst1
aws.stack1.inst2 aws.stack2.inst2
aws.stack1.inst3 aws.stack3.inst1
aws.stack1.inst4 aws.stack4.inst1
I would like to be able to do the following instead:
$ ssh aws.[tab][tab]
stack1 stack2 stack3
$ ssh aws.stack1.[tab][tab]
inst1 inst2 inst3 inst4
But I have not been able to figure out how, and where, to add this functionality by navigating through available bash_completion scripts.
I don't want to overwrite the current ssh completion, just extend its behavior when the argument to ssh starts with aws.
How/Where can/should I implement this behavior?
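For illustration, the kind of wrapper I have in mind is sketched below: it would handle words starting with aws. one dot-separated component at a time and defer to the stock _ssh function from the bash-completion package for everything else. I don't know whether this is a sane approach, whether it plays nicely with the packaged completion, or where such a function should live:
# Rough sketch only, not a tested solution.
_ssh_aws_components() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    local hosts h next
    if [[ $cur == aws.* || $cur == aws ]]; then
        # Every Host name declared in ~/.ssh/config
        hosts=$(awk '$1 == "Host" {for (i = 2; i <= NF; i++) print $i}' ~/.ssh/config)
        # Keep what is already typed plus only the next dot-separated component;
        # append "." when more components follow, and drop duplicates.
        COMPREPLY=( $(for h in $(compgen -W "$hosts" -- "$cur"); do
            next=${h#"$cur"}
            if [[ $next == *.* ]]; then
                printf '%s\n' "$cur${next%%.*}."
            else
                printf '%s\n' "$cur$next"
            fi
        done | sort -u) )
        compopt -o nospace 2>/dev/null   # keep typing after a partial host
    else
        _ssh "$@"   # defer to the regular ssh completion for everything else
    fi
}
complete -F _ssh_aws_components ssh
Where should something like this go so that it extends, rather than replaces, the packaged ssh completion?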

Related

How to repair SSH key pairs after recreating global ssh keys with Ansible

In a nutshell, after deleting and then recreating the global ssh keys on a managed host as part of an Ansible play, the shared ssh keys between the controller and the host break. I would like to know a better way to "fix" this issue and regain the original ssh key trust using Ansible itself. Unfortunately, this requires some explanation.
As a starting point: right now, Ansible is not set up when a new image is deployed. To remedy that, I created a bash script using expect which nicely and neatly does two things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all; however, it currently requires manually entering the IP of the host it is to be run against. We now have a desired state from which Ansible works well via ssh. However, it seems cumbersome at 328 lines of code to check and do this procedure; more on this later.
The issue starts because the host/server is deployed from an image, so there is a need to recreate the global keys on each one so that they do not all have the same set. The fix for this part of the issue is two simple steps (sketched below):
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
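On the managed host that boils down to something like this (run with root privileges; just a sketch of the two steps above):
rm -f /etc/ssh/ssh_host_*     # step 1: remove the cloned host keys
/usr/bin/ssh-keygen -A        # step 2: generate a fresh set of host keys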
However, we now have a problem: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts, because the known_hosts file on the controller now holds keys that don't match. If you do nothing else, you get prompted again to verify the remote key, as it has "changed", and you can't continue until you do (which stops all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands I can use with ssh-keygen and/or ssh-keyscan to fix this mess cleanly. However, for the life of me, I can't figure it out. My only recourse now is to re-run the bash script which initially sets this all up and replace everything ssh-key related on the controller and host. This seems like overkill; I can't possibly believe that is necessary.
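I'm guessing the manual equivalent is something along these lines on the controller (reusing the IP and known_hosts path from the error message above), but what I really want is for Ansible to handle it without manual steps:
ssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts      # drop the stale entry
ssh-keyscan -H 10.200.5.4 >> /home/ansible/.ssh/known_hosts     # record the new host keys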
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts everything in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible.
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs; there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable before the expect script is run so it makes sense in the next snippet:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should, until they are broken again by the global ssh key replacement.

SSH config file limited to 100 identity files

I have to connect to more than 100 machines through SSH. I made a script to make all the connections and perform the changes that I need. The problem is that I can't type the password every time I execute the script for each of the remote machines. Then I found out that I could create a file named config in the /root/.ssh/ directory where I can store lines like this:
IdentityFile /root/.ssh/id_rsa_XXXX
The key pairs are also saved in /root/.ssh/, but the problem is that there is a limit of 100 identity files that I can list in the config file.
Do you know if there's a workaround to make this possible?
Thanks to all, first question here! :)
First of all, if you have 100 servers to connect to and 100 keys, you are doing it wrong. You can reuse the same key pair for other servers as long as you make sure the private key is safe.
If you are trying to load all the keys into ssh at once, you are also doing it wrong. The ssh config has a Host keyword, which can be used to choose which key is used for which server, and I advise you to use it. Otherwise ssh will not know which key to use for which server; scoping keys this way also gets around the limit.
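For illustration, a couple of Host blocks with patterns could cover whole groups of servers with one key each (the host patterns and key names here are made up):
Host web-*
IdentityFile /root/.ssh/id_rsa_web
Host db-*
IdentityFile /root/.ssh/id_rsa_db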
Do you have separate ssh keys for each and every server? You could bundle them (one key for each type/function of server). Then you wouldn't need to specify each inside a config file.
Another way around this, would be to call the key from the command line, instead of a config file like so:
ssh -i /root/.ssh/id_rsa_XXX -l user.name server.example.com
If you do it carefully, you could create /root/.ssh/hostname where hostname is the actual hostname of the server you want to connect to. For example:
/root/.ssh/server.example.com
You could then script (BASH) like so (assuming you call the script dossh.sh):
key_and_hostname=$1
ssh -i /root/.ssh/${key_and_hostname} -l user.name ${key_and_hostname}
call the script like:
dossh.sh server.example.com

How to autocomplete or use abbreviations in bash command

Every day I have to transfer files to several (6) AWS EC2 instances via scp, from my bash terminal, on Debian 8 Linux. I have to type out by hand:
i) files to transfer;
ii) destination folder;
iii) the path to my .pem certificate;
iv) the login address of the servers: ec2-user@XX.XX.XXX....
As a lazy programmer I want to use some kind of abbreviation in order to type less, bearing in mind that iii) and iv) are generally the same.
First I wrote a bash script which asks me for the files to be transferred and the folder on my server; however, it takes more time because it prompts for an answer every time I transfer. Instead, I can just use the up arrow to repeat the previously entered command.
Here is an example of the current command I use to run:
scp -i /pem/aws.pem file_to_upload.txt ec2-user@XX-XX-XX-XX.amazonaws.com:/var/www/html/folder/
the following strings require repeated typing:
scp -i /pem/aws.pem
ec2-user@..... // this is a long string of 64 characters.
I'd love some sort of placeholder, where I just type the path to files to upload and the remote folder.
How can I autocomplete or use abbreviations, like in Vim, to insert the same recurring text in a bash command?
All the solutions already posted work, but they are really ugly and not the way it should be done. Remember, we still have ~/.ssh/config.
scp -i /pem/aws.pem file_to_upload.txt ec2-user@XX-XX-XX-XX.amazonaws.com:/var/www/html/folder/
will boil down to
scp file_to_upload.txt am:/var/www/html/folder/
if you set up your ~/.ssh/config:
Host am
Hostname XX-XX-XX-XX.amazonaws.com
User ec2-user
IdentityFile /pem/aws.pem
There your local file gets auto-completed, and the remote directory does too if you don't use a passphrase (or if you set up the ControlMaster and ControlPersist options, which even makes it fast!).
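A minimal sketch of those multiplexing options in ~/.ssh/config (the ControlPath value is just a common convention):
Host *
ControlMaster auto
ControlPath ~/.ssh/cm-%r@%h:%p
ControlPersist 10m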
Sounds like you could take advantage of using some bash aliases in your bashrc or bash_profile.
Take a look at my .bashrc file on GitHub.
I often need to know what my IP address is. One example of a bash alias is my showip alias:
alias showip='ifconfig | grep "inet" | grep -v 127.0.0.1'
I have also created aliases to ssh into my Linux boxes. You could create a bash alias that should solve your problem.
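For the scp case in the question, a bash alias along these lines would cut out the repeated flags (the alias name is made up):
alias awsscp='scp -i /pem/aws.pem'
awsscp file_to_upload.txt ec2-user@XX-XX-XX-XX.amazonaws.com:/var/www/html/folder/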
Copying directory recursively from host:
scp -r user@host:/directory/SourceFolder TargetFolder
NOTE: If the host is using a port other than port 22, you can specify it with the -P option:
scp -P 2222 user@host:/directory/SourceFile TargetFile
When you are connecting to your bash terminal with an ssh program, see what keyboard mappings your ssh program supports.
Other options, based on
echo "This is a line I do not want to type twice"
Look at how you can use the bash history:
!!
^twice^two times^
You can put your abbreviations in shell variables
no2="This is a line I do not want to type twice"
echo "${no2}"
And you can use an alias
my2='echo "This is a line I do not want to type twice"'
my2
The shell variables and aliases can be put in ${HOME}/.bashrc.

Vagrant: How can you run scripts on the host via commands in the guest shell?

It's possible to open ports and network files, and there are plug-ins that allow for running guest or host [shell] commands during Vagrant's provisioning process.
What I'd like to do is be able to (perhaps through a bash alias) run a command in the Vagrant guest/VM, and have this execute a command on the host, ideally with a variable being passed on the command line.
Example: In my host I run the Atom editor (same applies to TextMate, whatever). If I want to work on a shared file in the VM, I have to manually open that file from over in the host, either by opening it directly in the editor, or running the 'atom filename' shell command.
I want parity, so while inside the VM, I can run 'atom filename', and this will pass the filename to the 'atom $1' script outside of the VM, in the host, and open it in my host editor (Atom).
Note: We use Salt for Vagrant Provisioning, and NFS for mounting, for what it's worth. And of course, ssh with key.
Bonus question: Making this work with .gitconfig as its merge conflict editor (should just work, if the former is possible, right?).
This is a very interesting use case that I haven't heard of before. There isn't a native method of handling this in Vagrant, but this functionality was added to Packer in the form of a local shell provisioner. You could open a GitHub issue on the Vagrant project and propose the same feature. Double-check the current list of issues, though, because it's possible someone has beaten you to it.
In the meantime, though, you do have a workaround if you're determined to do this...
Create an ssh key pair on your host.
Use Salt to add the private key in /home/vagrant/.ssh on the box.
Use a shell provisioner to run remote ssh commands on the host from the guest.
These commands would take the form of...
ssh username@192.168.0.1 "ls -l ~"
In my experience, the 192.168.0.1 IP always points back to the host, but your mileage may vary. I'm not a networking expert by any means.
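To make the 'atom filename' case concrete, a guest-side wrapper along these lines could forward the call back to the host over that key (the host user, IP, and path mapping are all assumptions):
#!/bin/bash
# /usr/local/bin/atom inside the guest: open the given file in the host's Atom.
# Assumes the file lives under the shared /vagrant mount, which corresponds to
# $HOST_PROJECT_DIR on the host.
HOST_USER=yourname
HOST_IP=192.168.0.1
HOST_PROJECT_DIR=/path/to/project
REL=$(realpath --relative-to=/vagrant "$1")
ssh "$HOST_USER@$HOST_IP" "atom '$HOST_PROJECT_DIR/$REL'"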
I hope this works for you and I think a local shell provisioner for Vagrant would be a reasonable feature.

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but most of them cover running one command, or perhaps a few commands, with something like:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
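For example, a minimal sketch wrapping that in a parallel loop, with placeholder host names and per-host log files:
for host in host1 host2 host3; do
    ssh "user@$host" sh < path_to_script > "$host.log" 2>&1 &   # run in the background
done
wait   # block until every remote run has finished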
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure (a rough sketch of the commands follows the two steps):
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
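For instance, something like this (host name and paths are placeholders):
scp setup.sh user@target-host:/tmp/setup.sh    # 1. copy the install/setup script over
ssh user@target-host 'sh /tmp/setup.sh'        # 2. invoke it on the remote box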
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here, in French.)
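A minimal pssh invocation might look like this, assuming a hosts.txt file with one user@host entry per line:
pssh -h hosts.txt -i 'uptime'   # -i prints each host's aggregated output inline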
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run
env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run:
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
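A minimal sketch of that setup, assuming OpenSSH's ssh-keygen and ssh-copy-id are available:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair with no passphrase
ssh-copy-id user@box1_name                  # install the public key on each box
ssh-copy-id user@box2_name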
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
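A minimal invocation, assuming the clusterssh package's cssh command, looks like:
cssh user@host1 user@host2 user@host3
One small terminal opens per host, plus a console window that broadcasts whatever you type to all of them.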
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's commandline, this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xj the_package.tbz or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war [you don't normally have to restart tomcat]
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart [might need to login as apache user to move that file and restart]
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh much like you would use ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
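For example, with the Debian dsh package a group is just a text file of host names (check man dsh for the exact group-file location and option names; the flags below are from memory):
# ~/.dsh/group/web -- one host per line, e.g. web1.example.com
dsh -g web -M -c -- uptime   # -g selects the group, -M prefixes output with the host name, -c runs concurrently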
This creates a per-hostname ssh command for every machine you have accessed.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is already hashed ("encrypted"), delete it so new entries keep readable host names
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a new blog post about using setsid, which needs no further installation/configuration beyond the mainstream kernel. Tested/verified under Ubuntu 14.04.
The author has a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is plain old tmux. Say you have 9 Linux servers. Pick a box as your main one and start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into each box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they will show up in every pane. To turn it off, just change on to off. To cycle through pane layouts, press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as sometimes servers crash or hang for whatever reason when downloading or upgrading software. If there are any issues, you can just isolate and resolve them individually.
