I'm attempting to make a system which automatically copies files from one server to many servers. As part of that I'm using rsync and installing SSH keys, which works correctly.
My problem is that when it attempts to connect to a new server for the first time it will ask for a confirmation. Is there a way to automatically accept?
Example command/output:
rsync -v -e ssh * root@someip:/data/
The authenticity of host 'someip (someip)' can't be established.
RSA key fingerprint is somerandomrsakey.
Are you sure you want to continue connecting (yes/no)? yes
You can add this host's key to known_hosts beforehand like this:
ssh-keyscan $someip >> ~/.ssh/known_hosts
If they genuinely are new hosts, and you can't add the keys to known_hosts beforehand (see York.Sar's answer), then you can use this option:
-e "ssh -o StrictHostKeyChecking=no"
I know that this was asked 3 years ago, but this was at the top of my Google search and I was unable to get either of these solutions to work correctly in the middle of a Vagrant script. So I wanted to put here the method that I found somewhere else.
The solution there talks about updating the ~/.ssh/config or /etc/ssh/ssh_config file with the following blocks of code.
To disable host key checking for a particular host (e.g., remote_host.com):
Host remote_host.com
StrictHostKeyChecking no
To turn off host key checking for all hosts you connect to:
Host *
StrictHostKeyChecking no
To avoid host key verification, and not use the known_hosts file, for the 192.168.0.* subnet:
Host 192.168.0.*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
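If you are not sure which of these settings ends up applying to a particular host, newer OpenSSH clients (6.8 and later) can print the effective configuration; the host name here is just the example from above:
ssh -G remote_host.com | grep -i stricthostkeychecking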
I hope this helps someone else who runs into this issue.
I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp I think, not that it matters) is automated and I can't easily accept the new key, even after checking with the third party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (it's probably okay if you're just playing on your own local home network, but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user@hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in OpenSSH 6.5).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note, StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key was changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys.
If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions.
If this flag is set to “ask” (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
The host keys of known hosts will be verified automatically in all cases.
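If you want accept-new as the default rather than per command, it can also go in ~/.ssh/config, ideally scoped to hosts you trust; the subnet below is only an example:
Host 192.168.0.*
    StrictHostKeyChecking accept-new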
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key, you might land with an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server that you don't own can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if you absolutely trust the network and are confident the server has not been compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init, immediately after the machine has started.
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a one-liner that will retrieve the key and add it to your known_hosts file (the host name has to be prepended so the line matches the known_hosts format):
echo "hostname $(ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_dsa_key.pub)" >> ~/.ssh/known_hosts
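Alternatively, ssh-keyscan fetches the host key without logging in, and ssh-keygen -l -F lets you display the stored fingerprint so you can compare it against one obtained out of band (hostname is the same placeholder as above):
ssh-keyscan hostname >> ~/.ssh/known_hosts
ssh-keygen -l -F hostname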
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt".
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove the old key
ssh-keygen -R "$target_host"
# Add the new key
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts
You just have to remove the stale entry for that server so the new fingerprint it sends can be accepted. Type the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all the other answers, this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
Set HOST to the host in question, remove any existing entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As with this answer prefer ed25519.
Get a list of SSH host IPs (or DNS names) and output them to a file, e.g. ssh_hosts
Run a one-liner to populate ~/.ssh/known_hosts on the control node (I often do this to prepare target nodes for an Ansible run)
NOTE: This assumes we prefer the ed25519 type of host key
# add the target hosts' key fingerprints
while read -r line; do ssh-keyscan -t ed25519 "$line" >> ~/.ssh/known_hosts; done < ssh_hosts
# add the SSH key's public part to each target host's `authorized_keys` file
while read -r line; do ssh-copy-id -i /path/to/key -f user@"$line"; done < ssh_hosts
ssh -o UserKnownHostsFile=/dev/null user@host
Add the following file
~/.ssh/config
with this as its content:
StrictHostKeyChecking no
This setting will make sure that ssh never asks for a fingerprint check again.
It should be added very carefully, as it is really dangerous: ssh will accept every host's fingerprint without verification.
In a nutshell, after deleting then recreating new global ssh keys on a managed host as part of an ansible play, the shared ssh keys between the controller and the host break. I would like to know a superior method to "fix" this issue and regain the original ssh key trust using ansible itself. Unfortunately this will require some explanation.
Basically as a start, right now, I don't have ansible set up when a new image is deployed. To remedy that, I have created a bash script, utilizing expect which nicely and neatly does 2 things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all, however it does require manual input at this time as to the IP of the host to be run on. We now have a desired state from which ansible works well via ssh. However it seems cumbersome at 328 lines of code to check and do this procedure, more on this later.
The issue starts, due to the fact that the host/server is deployed from an image, there is a need to recreate the global keys on each so that they do not have the same set. The fix for this part of that issue is a simple 2 steps:
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
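For reference, those two steps boil down to something like this on the managed host:
# remove the host keys baked into the image
sudo rm -f /etc/ssh/ssh_host_*
# regenerate a fresh set of host keys
sudo /usr/bin/ssh-keygen -A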
We now have a problem, however: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts, because the known_hosts file on the controller now has keys that don't match. If you do nothing, you get a prompt to verify the remote key again because it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands I can use with ssh-keygen and/or ssh-keyscan to fix this mess cleanly. However, for the life of me I can't figure it out. My only recourse now is to re-run the bash script which initially sets this all up and replace everything, controller/host ssh-key-wise. This seems like overkill; I can't possibly believe that is necessary.
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (Also it has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets are below to show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible on the controller.
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs, there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable earlier, before the expect script is run, so it makes sense in the next snippet:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.
I have a MacBook, and by mistake I tried to set up passwordless ssh access for GitHub and definitely messed things up. I am new to Macs and I have no idea how to solve the issue below.
When I try to establish an ssh connection, I get the following:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:DrvZ0osXO8YCnPwEaWyiGKf/SxQGcCZBoNcPrhEdKmI.
Please contact your system administrator.
Add correct host key in /Users/rakeshravi/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/rakeshravi/.ssh/known_hosts:4
RSA host key for rivanna.hpc.virginia.edu has changed and you have requested strict checking.
lost connection
From what I found in online forums, I tried the following command, but to no avail.
ssh-keygen -R hostname
I get the following as response.
do_known_hosts: hostkeys_foreach failed: No such file or directory
You can open the file /Users/rakeshravi/.ssh/known_hosts with your favorite editor, delete line 4, and save the file. The next time you try to connect to the host, you will be asked to add the new key to known_hosts.
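If you prefer a one-liner, something like this should do the same on macOS (BSD sed; the line number is the one reported in the "Offending RSA key in ... known_hosts:4" message):
sed -i '' '4d' /Users/rakeshravi/.ssh/known_hosts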
Add the option -o "StrictHostKeyChecking no". This will suppress any checking. Example:
ssh root#localhost -o "StrictHostKeyChecking no"
My two cents.
Other answers are fine, but here is what I did, and it worked fine.
I completely got rid of the following two files.
For me it's fine, since I am just playing with VMs on Azure.
I am new to EC2. I created my security credentials from this site:
http://paulstamatiou.com/how-to-getting-started-with-amazon-ec2
It worked great; I rebooted, and now when I try to connect I get a login/password prompt (which I never set up). After several attempts I get this error:
Permission denied (publickey,gssapi-with-mic).
What am I doing wrong?
Two possibilities I can think of, although they are both mentioned in the link you referenced:
You're not specifying the correct SSH keypair file or user name in the ssh command you're using to log into the server:
ssh -i [full path to keypair file] root@[EC2 instance hostname or IP address]
You don't have the correct permissions on the keypair file; you should use
chmod 600 [keypair file]
to ensure that only you can read or write the file.
Try using the -v option with ssh to get more info on where exactly it's failing, and post back here if you'd like more help.
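For example (hypothetical key path and address; later answers point out that the user may need to be ec2-user or ubuntu rather than root, depending on the AMI):
chmod 600 ~/ec2-keypair.pem
ssh -v -i ~/ec2-keypair.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com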
[Update]: OK, so this is what you should have seen if everything was set up properly:
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Are you running the ssh command from the directory containing the ec2-keypair file? If so, try specifying -i ./ec2-keypair just to eliminate path problems. Also check "ls -l [full path to ec2-keypair]" and make sure the permissions are 600 (displayed as -rw-------). If none of that works, I'd suspect the contents of the keypair file, so try recreating it using the steps in your link.
The key for me to be able to connect was to use the "ec2-user" user rather than root. I.e.:
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
I noticed that for some AMIs, like Amazon Linux, ec2-user@xxx.XX.XX.XXX would work. But for an Ubuntu image, I had to use ubuntu@ instead. It was never a problem with the .pem, just with the user name.
In my case it was because the permissions on my home directory were 775, and SSH is not happy about that. It should work after executing:
server$ chmod go-w ~/
server$ chmod 700 ~/.ssh
server$ chmod 600 ~/.ssh/authorized_keys
I had very similar experience this afternoon. I was setting up django on EC2, and suddenly I cannot SSH into the box anymore. Glad I still had an active connection, so I modified /etc/ssh/sshd_config to set:
PasswordAuthentication yes
and set password for ec2-user, then I can login by entering the password.
However, after some googling I found this thread: http://ubuntuforums.org/showthread.php?t=577279. It turned out that during my setup of django I changed the permission for my home directory, and SSH is very strict about this. So the file permission must be set correctly.
I met this problem too, and I found that it happened because I forgot to add the user name before the host name:
like this:
ssh -i test.pem ec2-32-122-42-91.us-west-2.compute.amazonaws.com
Then I added the user name:
ssh -i test.pem ec2-user@ec2-32-122-42-91.us-west-2.compute.amazonaws.com
it works!
Tagging on to mecca831's answer:
ssh -v -i generated-key.pem ec2-user@11.11.11.11
[ec2-user@ip-11.11.11.11 ~]$ sudo passwd ec2-user
newpassword
newpassword
[ec2-user@ip-11.11.11.11 ~]$ sudo vi /etc/ssh/sshd_config
Modify the file as follows:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
# EC2 uses keys for remote access
#PasswordAuthentication no
Save
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd stop
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd start
you should be able to exit and ssh in as follows:
ssh ec2-user@11.11.11.11
and be prompted for a password, no longer needing the key.
Are you sure you have used the right instance? I ran into this problem and realized that something like 4 of the Ubuntu instances I tried did not have SSH servers installed on them.
For a list of good servers, see "Getting the images" about halfway down the page. It sounds like you may be using something else... the default username is ubuntu on these images.
https://help.ubuntu.com/community/EC2StartersGuide
I was able to log in using ec2-user:
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
After about a half hour of searching and trying to debug this I was able to figure it out. My situation involved using the same pem file for two different EC2 instances; it worked for one and not the other.
My first instance it worked on was the standard aws linux ami amzn-ami-hvm-2014.03.2.x86_64-ebs. I simply used
ssh -i mypemfile.pem ec2-user@myec2ipaddress
and it worked.
I then launched a fedora instance Fedora-x86_64-19-20140407-sda and tried the same command but kept getting:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
After changing my username from ec2-user to fedora it worked!
ssh -i mypemfile.pem fedora@myec2address
None of the above helped me, but futzing with the user seemed like it had promise. For my config, using 'ubuntu' was right:
ssh -i [full path to keypair file] ubuntu@[EC2 instance hostname or IP address]
I recommend against setting a password as some other answers suggest. Using the key file is both safer (no one can guess your passwords) and more convenient (once you set up a config file). Here's a basic ~/.ssh/config:
Host my-ec2-server
HostName 11.11.11.11
User ec2-user
IdentityFile /path/to/generated-key.pem
Now you can just type ssh my-ec2-server and you're in! And as also mentioned in other answers, use -v to get extra info when your connection isn't working.
If the issue is consistent and has happened 10-15 times in a row even after changing the key file permissions to 400 or 600, then almost certainly something is wrong on the EC2 instance itself. To make sure:
Check the logs when you try to ssh to the instance by adding -v at the end, and see whether they give out anything specific.
Make sure you use the correct user name for ssh, such as ubuntu. That depends on the Linux distribution, on the users you have added, and on whether you've given permission for root-user ssh.
Then, if nothing helps, follow the documentation here https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#TroubleshootingInstancesConnectingMindTerm
to fix that. It helped in my case, where the cause was messed-up directory/file permissions.
If you have a PPK file working on a PC, export it as an OpenSSH key file using puttygen.exe on the PC and use that on the Mac (or any Unix machine).
I was getting the same error --
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-with-mic)
As I was using a PPK file on Windows, I followed the steps as described above and Bingo!
$ ssh -i ec2-openssh-key root@ec2-instance-ip
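If you'd rather do the conversion from the command line, the puttygen tool (from the putty-tools package on Linux, or installable via Homebrew on a Mac) can export the same OpenSSH key; the file names here are just examples:
puttygen my-key.ppk -O private-openssh -o ec2-openssh-key
chmod 600 ec2-openssh-key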
I had the same problem using the AWS Toolkit for Eclipse. I created the Getting Started instance OK and opened a shell. However, the user was set to ec2-user. I used the Open Shell As... command and set the user to root. Then it worked.
Had a similar issue. Here are the steps I used to set up SSH keys and agent forwarding on the Mac. I made these notes for myself - they may help someone... check them against your config.
The assumption here is there are no keys setup. If you already have the keys setup skip this section.
$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
Modify ~/.ssh/config adding the entry for the key file:
~/.ssh/config should look similar to:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa
Store the private key in the keychain:
$ ssh-add -K ~/.ssh/id_rsa
Go test it now with: ssh -A username@yourhostname
This should forward your key to yourhostname. Assuming your keys are added, you should connect without issue.
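A quick way to confirm the key actually made it into the agent before testing the connection:
ssh-add -l
# lists the fingerprints of all keys the agent currently holds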
I was getting this error when I was trying to ssh into an EC2 instance on the private subnet from the bastion. To fix this issue, you have to run ssh-add -K, as follows.
Step 1: run "chmod 400 myEC2Key.pem"
Step 2: run "ssh-add -K ./myEC2Key.pem" on your local machine
Step 3: ssh -i myEC2Key.pem root@ec2-107-20-4-100.compute-1.amazonaws.com
Step 4: Now try to ssh to EC2 instance that is on a private subnet without specifying the key, for example, try ssh ec2-user#ipaddress.
Hope this will help.
Note: This solution is for Mac.
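A condensed sketch of the same flow; note that step 4 relies on agent forwarding, which typically means connecting to the bastion with -A (or ForwardAgent yes in ~/.ssh/config). The addresses below are placeholders:
chmod 400 myEC2Key.pem
ssh-add -K ./myEC2Key.pem              # load the key into the agent (and the macOS keychain)
ssh -A ec2-user@bastion-public-address # -A forwards the agent to the bastion
# then, from the bastion:
ssh ec2-user@private-subnet-address    # no -i needed; the forwarded agent supplies the key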