I have to connect to more than 100 machines through SSH. I made a script to make all the connections and perform the changes that I need. The problem is that I can't type the password every time I execute the script for each of the remote machines. Then I found out that I could create a file named config in the /root/.ssh/ directory where I can store lines like this:
IdentityFile /root/.ssh/id_rsa_XXXX
The key pair is also saved in /root/.ssh/, but the problem is that there is a limit of 100 identity files that I can write in the config file.
Do you know if there's a workaround to make this possible?
Thanks to all, first question here! :)
First of all, if you have 100 servers to connect to and 100 keys, you are doing it wrong. You can reuse the same public key for other servers, as long as you make sure the private key is safe.
If you are trying to load all the keys into ssh at once, you are also doing it wrong. The ssh config has a Host keyword, which lets you say which key should be used with which server, and I advise you to use it. Otherwise ssh will not know which key to use for which server; scoping keys per host also gets you around the limit.
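For illustration, a minimal ~/.ssh/config along those lines might look like this (the host names and key file names are made up):
Host server1.example.com
    IdentityFile /root/.ssh/id_rsa_group1
    IdentitiesOnly yes
Host server2.example.com server3.example.com
    IdentityFile /root/.ssh/id_rsa_group2
    IdentitiesOnly yes
IdentitiesOnly yes makes ssh offer only the key configured for that host instead of trying every identity it knows about.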
Do you have separate ssh keys for each and every server? You could bundle them (one key for each type/function of server). Then you wouldn't need to specify each inside a config file.
Another way around this would be to call the key from the command line instead of from a config file, like so:
ssh -i /root/.ssh/id_rsa_XXX -l user.name server.example.com
If you do it carefully, you could create /root/.ssh/hostname where hostname is the actual hostname of the server you want to connect to. For example:
/root/.ssh/server.example.com
You could then script it in Bash like so (assuming you call the script dossh.sh):
#!/bin/bash
# the key file is named after the host, e.g. /root/.ssh/server.example.com
key_and_hostname=$1
ssh -i /root/.ssh/"${key_and_hostname}" -l user.name "${key_and_hostname}"
call the script like:
dossh.sh server.example.com
In a nutshell: after deleting and then recreating the global ssh keys on a managed host as part of an Ansible play, the shared ssh keys between the controller and the host break. I would like to know a better method to "fix" this issue and regain the original ssh key trust using Ansible itself. Unfortunately, this will require some explanation.
Basically, as a start, right now I don't have Ansible set up when a new image is deployed. To remedy that, I have created a bash script, using expect, which nicely and neatly does two things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all; however, it does require manual input at this time as to the IP of the host to run on. We now have a desired state from which Ansible works well via ssh. However, at 328 lines of code it seems cumbersome to check and do this procedure; more on this later.
The issue starts because the host/server is deployed from an image, so there is a need to recreate the global keys on each one so that they do not all have the same set. The fix for this part of the issue is a simple two steps (a sketch follows the list):
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
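Roughly, those two steps look like this when run on the managed host (shown here with a simple glob):
sudo rm -f /etc/ssh/ssh_host_*
sudo /usr/bin/ssh-keygen -A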
We now have a problem, however: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts, because the known_hosts file on the controller now holds keys that don't match. If you do nothing else, you get a prompt to verify the remote key again because it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands I can use with ssh-keygen and/or ssh-keyscan to fix this mess cleanly, but for the life of me I can't figure them out. My only recourse now is to re-run the bash script which initially sets this all up and replace everything ssh-key-wise on the controller and host. That seems like overkill; I can't possibly believe it is necessary.
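Something along these lines, run from the controller, is roughly what I have in mind (the IP and known_hosts path are taken from the error above), but I don't know whether this is the clean, sanctioned way:
ssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts          # drop the stale entry
ssh-keyscan -t ecdsa 10.200.5.4 >> /home/ansible/.ssh/known_hosts   # record the new host key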
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script runs, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The following is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity entry using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible on the controller.
#THISHOST is the IP of the managed host in the run. Yes, we ONLY use IPs, there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable earlier, before the expect script is run, so the next snippet makes sense:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet from the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.
I have about 20 Macs on my network that always need fonts installed.
I have a folder location where I ask them to put the fonts they need synced to every machine (to save time, I install the font on every machine so that if they move machines, I don't need to do it again).
At the moment I am just manually rsyncing the fonts from this server location to all the machines, one by one, using
rsync -avrP /server/fonts/ /Library/Fonts/
This requires me to ssh into every machine.
Is there a way I can script this using a hosts.txt file with the IPs? The password is the same for every machine and I'd rather not type it 20 times. Security isn't an issue.
I want something that allows me to call the script and point it at a font, i.e.
./install-font font.ttf
I've looked into scp, but I don't see any example of specifying a password anywhere in the script.
cscp.sh
#!/bin/bash
while read host; do
scp $1 ${host}:
done
hosts
project-prod-web1
project-prod-web2
project-prod-web3
Usage
Copy file to multiple hosts:
cscp.sh file < hosts
But this asks me to type a password every time and doesn't specify the target location on the host.
I don't see any example of specifying a password anywhere in the script.
Use the ssh-copy-id command to install your public key on each of these hosts. After that, ssh and scp will use public-key authentication without requiring you to enter the password.
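To sketch how that could fit together here (the account name admin, the file name hosts.txt, and the font path are assumptions based on your question):
#!/bin/bash
# install-font: copy a font to every Mac listed in hosts.txt (one IP per line).
# One-time setup beforehand, typing the shared password once per host:
#   while read -r host; do ssh-copy-id "admin@${host}"; done < hosts.txt
font="$1"
while read -r host; do
    # stdin from /dev/null so scp cannot swallow the rest of the host list
    scp "${font}" "admin@${host}:/Library/Fonts/" < /dev/null
done < hosts.txt
Then ./install-font font.ttf copies the font everywhere with no password prompts.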
I'm wondering how I would go about creating my own bash script to ssh to a server. I know it's lazy, but I would ideally want not to have to type out:
ssh username@server
And just have my own two letter command instead (i.e. no file extension, and executable from any directory).
Any help would be much appreciated. If it helps with specifying file paths etc, I am using Mac OS X.
You can set configs for ssh in file ~/.ssh/config:
Host dev
HostName mydom.example.com
User myname
Then, just type
$> ssh dev
And you're done. Also, you can add your public key to the file ~/.ssh/authorized_keys on the server so you won't get prompted for your password every time you want to connect via ssh.
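For example, assuming you already have a key pair (created with ssh-keygen), ssh-copy-id can install it for you through the alias above:
ssh-copy-id dev    # appends your public key to ~/.ssh/authorized_keys on mydom.example.com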
Use an alias.
For example: alias sv='ssh user@hostname', then you can simply type sv.
Be sure to put a copy of the aliases in your profile, otherwise they will disappear at the end of your session.
You could create an alias like this:
alias ss="ssh username@server" and write it into your .bash_profile. ".bash_profile" is a hidden file located in your home directory. If .bash_profile doesn't exist yet (check by typing ls -a in your home directory), you can create it yourself.
The bash_profile file will be read and executed every time you open a new shell.
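A minimal sketch of that workflow (the alias name and host are just examples):
echo 'alias ss="ssh username@server"' >> ~/.bash_profile
source ~/.bash_profile    # reload so the alias is available in the current shell
ss                        # connects to username@server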
You can use ssh-argv0 to avoid typing ssh.
To do this, you need to create a link to ssh-argv0 with the name of the host you want to connect to, including the user if needed. Then you can execute that link, and ssh will connect you to the host named by the link.
Example
Setup the link:
ln -s /usr/bin/ssh-argv0 ~/bin/my-server
/usr/bin/ssh-argv0 is the path of ssh-argv0 on my system, yours could be different, check with which ssh-argv0
I have put it in ~/bin/ to be able to execute it from any directory (in OS X you may need to add ~/bin/ manually to your path on .bash_profile)
my-server is the name of my server, and if needed to set the user, it would be user#my-server
Execute it:
my-server
Even more
You can also combine this with mogeb's answer to configure your server connection, so that you can call it with a shorter name and avoid having to include the user even if it is different from the one on the local system.
Host serv
HostName my-server
User my-user
Port 22
then set a link to ssh-argv0 with the name serv, and connect to it with
serv
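In other words, something like this (again assuming ~/bin is on your PATH):
ln -s /usr/bin/ssh-argv0 ~/bin/serv
serv    # connects as my-user to my-server on port 22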
All the VMs at work that I need to ssh into have a common format (stuff014.stuff.com) with differing numbers. Is there a quick way to connect to them without making a big ssh config file and without using aliases?
(Replace <your_user_name> with your user name.)
#!/bin/bash
ssh <your_user_name>@stuff$1.stuff.com
The $1 is the first parameter given, so if this was named easyssh.sh and you needed to get to 014 do
./easyssh.sh 014
To make this even better add it to a folder on your PATH (or add the directory to your path, whichever suits your needs).
You wouldn't need a big config file. A minimal implementation only requires two lines.
host stuff*
HostName %h.stuff.com
Any host you try to connect to is matched against the host patterns in your config file, stopping at the first one that matches. The HostName directive uses the matched host (%h) to construct the actual host name to connect to.
Then you can abbreviate the host name when running ssh:
$ ssh stuff014
# Connects to stuff014.stuff.com
Is there a way to save the password of an ssh connection inside a URI link? AFAIK a URI can look like this: username:password@domain/path. But the following example doesn't work on Ubuntu:
ssh user:pass@domain/path
I always receive a "please enter password" question. I know that it is not a very secure way to save the password in plain text inside a link, but I have to work with other developers and, what can I say... they are ex-Windows users, they don't like terminals, and therefore I want to write a tiny shell script. This script should clone a remote git repo and create some specific stuff.
One click and I should do some magic!
You should use an ssh key generated with ssh-keygen (man ssh-keygen). This is also available on the Windows platform within the PuTTY environment.
eval $(ssh-agent)
ssh-add ~/.ssh/yourkeyfilewithoutpassphrase
ssh user@sshserver "your remote command"
Before you can use your ssh key on the remote host, you must add the public key to its authorized_keys file. A convenient way is the command
ssh-copy-id -i ~/.ssh/yourkeyfilewithoutpassphrase.pub user@sshserver
or, if the key is already loaded by the ssh-agent
ssh-copy-id user@sshserver
After this point, you don't need any password for ssh connections to the remote hosts you have set up. You should use a different ssh key per user, so you are able to enable and disable keys without bothering the other users.
You can't supply the password inline like that with ssh.
An alternative is to set up an ssh key pair and log in using the key.
I followed the guide here: http://www.softwareprojects.com/resources/programming/t-ssh-no-password-without-any-private-keys-its-magi-1880.html
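In rough outline, the key-based setup that guide describes looks like this (the server, user, and repository path are placeholders):
ssh-keygen -t rsa                          # generate a key pair; an empty passphrase keeps it non-interactive
ssh-copy-id user@domain                    # install the public key on the server (asks for the password once)
git clone ssh://user@domain/path/repo.git  # after that, cloning needs no password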