I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp I think, not that it matters) is automated and I can't easily accept the new key, even after checking with the third party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (e.g. it's probably okay if you're just playing on your own local home network – but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user@hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in OpenSSH 6.5).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note that StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key has changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
The host keys of known hosts will be verified automatically in all cases.
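If you want this behaviour by default for one particular host instead of passing -o every time, an equivalent ~/.ssh/config entry should look like this (mynewserver.example.com reused from the example above):
Host mynewserver.example.com
StrictHostKeyChecking accept-new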
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key, you might end up with an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server, which you don't own, can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if one absolutely trusts the network or is sure that the server has not been compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init, immediately after the machine has started.
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a script that will retrieve the key and add it to your known_hosts file (note that a known_hosts entry must begin with the host name, which the remote public key file does not include, hence the sed):
ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_dsa_key.pub | sed 's/^/hostname /' >> ~/.ssh/known_hosts
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt".
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove the old key
ssh-keygen -R "$target_host"
# Add the new key
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts
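If you want a safety check before trusting the scanned key, one possible variation is to fetch it to a temporary file first and compare the fingerprint out-of-band (e.g. with the vendor) before appending:
# Fetch the key without trusting it yet
ssh-keyscan "$target_host" > /tmp/hostkey.$$
# Print its fingerprint for manual, out-of-band comparison
ssh-keygen -lf /tmp/hostkey.$$
# Append only once verified
cat /tmp/hostkey.$$ >> ~/.ssh/known_hosts && rm /tmp/hostkey.$$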
You just have to update the current fingerprint that's being sent from the server. Just type in the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all other answers - this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
(Note the semicolon: written as a prefix assignment, HOST would only be set in ssh-keygen's environment, and $HOST would expand before the assignment takes effect.)
First remove any entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As with this answer, prefer ed25519.
Get a list of SSH host IPs (or DNS names) and output it to a file, ssh_hosts.
Run a one-liner to populate ~/.ssh/known_hosts on the control node (I often do this to prepare target nodes for an Ansible run).
NOTE: this assumes we prefer the ed25519 type of host key.
# add the target hosts' key fingerprints
while read -r line; do ssh-keyscan -t ed25519 "$line" >> ~/.ssh/known_hosts; done < ssh_hosts
# add the SSH key's public part to each target host's `authorized_keys` file
while read -r line; do ssh-copy-id -i /path/to/key -f user@"$line"; done < ssh_hosts
ssh -o UserKnownHostsFile=/dev/null user@host
This points ssh at an empty known_hosts file, so every connection is treated as a first contact and a changed key never triggers the warning.
Create the following file (if it doesn't already exist):
~/.ssh/config
and put this in it as content:
StrictHostKeyChecking no
This setting makes sure that ssh never asks for fingerprint confirmation again.
Add it very carefully: it disables host key verification for every host you connect to, which is really dangerous.
I'm attempting to make a system which automatically copies files from one server to many servers. As part of that I'm using rsync and installing SSH keys, which works correctly.
My problem is that when it attempts to connect to a new server for the first time it will ask for a confirmation. Is there a way to automatically accept?
Example command/output:
rsync -v -e ssh * root@someip:/data/
The authenticity of host 'someip (someip)' can't be established.
RSA key fingerprint is somerandomrsakey.
Are you sure you want to continue connecting (yes/no)? yes
You can add this host's key to known_hosts beforehand like this:
ssh-keyscan $someip >> ~/.ssh/known_hosts
If they genuinely are new hosts, and you can't add the keys to known_hosts beforehand (see York.Sar's answer), then you can use this option:
-e "ssh -o StrictHostKeyChecking=no"
I know that this was asked 3 years ago, but it was at the top of my Google search and I was unable to get either of these solutions to work correctly in the middle of a Vagrant script. So I wanted to put here the method that I found somewhere else.
The solution there talks about updating the ~/.ssh/config or /etc/ssh/ssh_config file with the following blocks of code.
To disable host key checking for a particular host (e.g., remote_host.com):
Host remote_host.com
StrictHostKeyChecking no
To turn off host key checking for all hosts you connect to:
Host *
StrictHostKeyChecking no
To avoid host key verification, and not use a known_hosts file, for the 192.168.0.* subnet:
Host 192.168.0.*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
I hope this helps someone else who runs into this issue.
AFAIK, the commands ssh or scp do not have/take a password parameter. Otherwise I could keep the password in a shell variable and probably get rid of the enter password prompt. If I write an scp command in my shell script, it prompts the user to input the password. I have multiple ssh and scp commands in my script and I do not want the user to enter the password every time. I would prefer to save the password in a shell variable in the beginning (by asking password once), then use it for every ssh or scp.
I read about "public key identification" in this question. Is it related to the solution I am looking for?
Update
I read in How to use ssh command in shell script? why it is unsafe to specify passwords on the command line. Does using expect also store the password somewhere world-visible (via ps aux)? Is that the security issue with using expect?
Further Explanation
To further make it clear, I am writing this shell script to automate code and database backup, do code upload, run necessary database queries, do all the things that are needed for a new version release of a LAMP project from a developer system to a remote live server. My shell script will be there inside the main codebase of the project in every developer instance.
Requirement
I want all developers who know the SSH/FTP password (they may be working from different remote systems) to be able to use the script by entering the SSH/FTP password only once, at run time. I would prefer the password to be the SSH/FTP password itself.
Note - I do not want other developers who don't know the SSH password to be able to use it (so I guess public key authentication will not work, because it stores the credentials on the systems).
I do not want any command line solution which stores the password in some log in the system and can be world visible using ps aux or something.
Opening Bounty
From all the answers so far and my analysis of those solutions, it looks like everything other than public key authentication is insecure. I am not yet sure if using expect is insecure; I think it is otherwise the correct solution for me. In that case, I am getting command-not-found errors while trying to use it, as already noted in a comment on one of the answers.
From http://www.debianadmin.com/sshpass-non-interactive-ssh-password-authentication.html -
First and foremost, users of sshpass should realize that ssh's insistence on only getting the password interactively is not without reason. It is close to impossible to securely store the password, and users of sshpass should consider whether ssh's public key authentication provides the same end-user experience, while involving less hassle and being more secure.
So, is it not possible to securely run multiple ssh/scp commands by entering the ssh/ftp password only once at runtime? Please read my Requirement section again.
Also, can anyone explain this -
In particular, people writing programs that are meant to communicate the password programmatically are encouraged to use an anonymous pipe and pass the pipe's reading end to sshpass using the -d option.
Does this mean anything is possible?
Indeed, you'll definitely want to look into setting up ssh keys over saving a password in a bash script. If the key is passwordless, then no user input will be required to ssh/scp. You just set it up to use the key on both ends and voila, secured communication.
However, I'll get downvoted to hell if I don't say this: many consider passwordless ssh keys to be a Bad Idea(TM). If anybody gets their hands on the keys, they have full access. This means that you are relying on other security measures such as file permissions to keep your password safe.
Also, look into ssh-agent. It allows you to set things up so that you have a password-protected ssh key, but you only need to type the passphrase in once; it will manage the key for you and use it when necessary. On my Linux box at home, I have ssh-agent set up to run in my .xinitrc file so that it prompts me once and then starts X. YMMV.
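If your environment doesn't start an agent for you, the standard way to start one in a shell session looks like this (the key path assumes a default RSA key):
# Start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID into this shell
eval "$(ssh-agent -s)"
# Add your key; the passphrase is asked for once and cached by the agent
ssh-add ~/.ssh/id_rsa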
UPDATE:
With regards to your requirements, password protected public key authentication + ssh-agent still seems to fit. Only the developers privy to the SSH/FTP password could start up ssh-agent, type in the password and ssh-agent would manage the passwords for the public keys for the rest of the session, never requiring interaction again.
Of course, how it stores the password is another matter entirely. IANASE, but for more information on the security concerns of using ssh-agent, I found Symantec's article to be pretty informative: http://www.symantec.com/connect/articles/ssh-and-ssh-agent
"The ssh-agent creates a unix domain
socket, and then listens for
connections from /usr/bin/ssh on this
socket. It relies on simple unix
permissions to prevent access to this
socket, which means that any keys you
put into your agent are available to
anyone who can connect to this socket.
[ie. root]" ...
"however, [..] they are only usable
while the agent is running -- root
could use your agent to authenticate
to your accounts on other systems, but
it doesn't provide direct access to
the keys themselves. This means that
the keys can't be taken off the
machine and used from other locations
indefinitely."
Hopefully you're not in a situation where you're trying to use an untrusted root's system.
The right way to do that is as follows:
Ensure that all your users are using ssh-agent (nowadays this is the default for most Linux systems). You can check by running the following command:
echo $SSH_AUTH_SOCK
If that variable is not empty, it means that the user is using ssh-agent.
Create a pair of authentication keys for every user, ensuring they are protected by a non-empty passphrase.
Install the public part of the authentication keys on the remote host so that users can log there.
You are done!
Now, the first time a user wants to log into the remote machine from some session, they will have to enter the passphrase for their private key.
In later logins from the same session, ssh-agent will provide the unlocked key for authentication on behalf of the user, who will not be required to enter the passphrase again.
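Putting those steps together, a minimal sketch (the key type, user and host names are illustrative placeholders):
# Create a key pair protected by a non-empty passphrase (step 2)
ssh-keygen -t ed25519
# Install the public half on the remote host (step 3)
ssh-copy-id user@remote-host
# First login in a session prompts for the passphrase; ssh-agent caches it after that
ssh user@remote-host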
Ugh. I hit the man pages hard for this. Here's what I got:
Use this code near the beginning of the script to silently get the ssh password:
read -p "Password: " -s SSHPASS # *MUST* be SSHPASS
export SSHPASS
And then use sshpass for ssh like so:
sshpass -e ssh username@hostname
Hope that helps.
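Since sshpass just wraps whatever command you give it, the same exported SSHPASS should also work for scp; a hypothetical example (file and path are placeholders):
sshpass -e scp localfile username@hostname:/remote/path/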
You can use expect to pass a password to ssh (see Using expect to pass a password to ssh), or use public key authentication instead if that's a viable option.
For password authentication, as you mentioned in your description, you can use "sshpass". On Ubuntu, you can install it with "sudo apt-get install sshpass".
For public/private key-pair base authentication,
First generate keys using, "ssh-keygen"
Then copy your key to the remote machine, using "ssh-copy-id username@remote-machine"
Once copied, the subsequent logins should not ask for password.
Expect is insecure
It drives an interactive session. If you were to pass a password via expect, it would be no different from typing the password on the command line, except that the expect script would have to retrieve the password from somewhere. It's typically insecure because people will put the password in the script, or in a config file.
It's also notoriously brittle because it waits on particular output as the event mechanism for input.
ssh-agent
ssh-agent is a fine solution if this is a script that will always be driven manually. If someone is logged in to drive the execution of the script, then an agent is a good way to go. It is not a good solution for automation, because an agent implies a session; you usually don't initiate a session to automatically kick off a script (e.g. from cron).
ssh command keys
SSH command keys are your best bet for an automated solution. They don't require a session, and a command key restricts what runs on the server to only the command specified in authorized_keys. Command keys are also typically set up without passwords. This can be a difficult solution to manage if you have thousands of servers, but if you only have a few it's pretty easy to set up and manage.
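For illustration, a command-restricted entry in the server's ~/.ssh/authorized_keys might look like the following (the script path and key material are placeholders; command=, no-pty and no-port-forwarding are standard authorized_keys options):
command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding ssh-ed25519 AAAA...key-material... batch-backup-key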
service ssh accounts
I've also seen setups with password-less service accounts. Instead of the command entry in the authorized_keys file, an alternative mechanism is used to restrict access/commands. These solutions often use sudo or restricted shells. However, I think these are more complicated to manage correctly, and therefore tend to be more insecure.
host to host automatic authentication
You can also set up host-to-host automatic authentication, but there are a lot of things to get right to do this correctly: setting up your network properly, using a bastion host for host key dissemination, proper ssh server configuration, and so on. As a result, this is not a solution I recommend unless you know what you're doing and have the capacity and ability to set everything up correctly and maintain it as such.
For those for whom setting up a keypair is not an option and who absolutely need to perform password authentication, use $SSH_ASKPASS:
SSH_ASKPASS - If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)
E.g.:
$ cat <<EOF >password.sh
#!/bin/sh
echo 'password'
EOF
$ chmod 500 password.sh
$ echo $(DISPLAY=bogus SSH_ASKPASS=$(pwd)/password.sh setsid ssh user@host id </dev/null)
See also Tell SSH to use a graphical prompt for key passphrase.
Yes, you want pubkey authentication.
Today, the only way I was able to do this in a bash script run via crontab was like this:
eval $(keychain --eval --agents ssh id_rsa id_dsa id_ed25519)
source $HOME/.keychain/$HOSTNAME-sh
This works with the ssh agent already running; getting it into that state required entering the passphrase once.
ssh, ssh-keygen, ssh-agent, ssh-add, and a correct configuration in /etc/ssh/sshd_config on the remote systems are necessary ingredients for securing access to remote systems.
First, a private/public keypair needs to be generated with ssh-keygen. The result of the keygen process are two files: the public key and the private key.
The public key file, usually stored in ~/.ssh/id_dsa.pub (or ~/.ssh/id_rsa.pub, for RSA encryptions) needs to be copied to each remote system that will be granting remote access to the user.
The private key file should remain on the originating system, or on a portable USB ("thumb") drive that is referenced from the sourcing system.
When generating the key pair, a passphrase is used to protect it from usage by non-authenticated users. When establishing an ssh session for the first time, the private key can only be unlocked with the passphrase. Once unlocked, it is possible for the originating system to remember the unlocked private key with ssh-agent. Some systems (e.g., Mac OS X) will automatically start up ssh-agent as part of the login process, and then do an automatic ssh-add -k that unlocks your private ssh keys using a passphrase previously stored in the keychain file.
Connections to remote systems can be direct, or proxied through ssh gateways. In the former case, the remote system only needs to have the public key corresponding to the available unlocked private keys. In the case of using a gateway, the intermediate system must have the public key as well as the eventual target system. In addition, the original ssh command needs to enable agent forwarding, either by configuration in ~/.ssh/config or by command option -A.
For example, to login to remote system "app1" through an ssh gateway system called "gw", the following can be done:
ssh -At gw ssh -A app1
or the following stanzas placed in the ~/.ssh/config file:
Host app1
ForwardAgent = yes
ProxyCommand = ssh -At gw nc %h %p 2>/dev/null
which runs "net cat" (aka nc) on the ssh gateway as a network pipe.
The above setup will allow very simple ssh commands, even through ssh gateways:
ssh app1
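As an aside: on newer OpenSSH (7.3 and later), the ProxyJump option can replace the nc pipe; an equivalent stanza would look something like this (same host names as above):
Host app1
ForwardAgent = yes
ProxyJump = gw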
Sometimes, even more important than terminal sessions are scp and rsync commands for moving files around securely. For example, I use something like this to synchronize my personal environment to a remote system:
rsync -vaut ~/.env* ~/.bash* app1:
Without the config file and nc proxy command, the rsync would get a little more complicated:
rsync -vaut -e 'ssh -o "ProxyCommand=ssh -At gw nc %h %p"' ~/.env* ~/.bash* app1:
None of this will work correctly unless the remote systems' /etc/ssh/sshd_config is configured correctly. One such configuration is to remove "root" access via ssh, which improves tracking and accountability when several staff members can perform root functions.
In unattended batch scripts, a special ssh key-pair needs to be generated for the non-root userid under which the scripts are run. Just as with ssh session management, the batch user ssh key-pair needs to be deployed similarly, with the public key copied to the remote systems, and the private key residing on the source system.
The private key can be locked with a passphrase or unlocked, as desired by the system managers and/or developers. The way to use the special batch ssh key, even in a script running under root, is to use the "ssh -i ~/.ssh/id_dsa" command options with all remote access commands. For example, to copy a file within a script using the special "batch" user access:
rsync -vaut -e 'ssh -i ~batch/.ssh/id_dsa -o "ProxyCommand=ssh -At gw nc %h %p"' $sourcefiles batch@app2:/Sites/www/
This causes rsync to use a special ssh command as the remote access shell. The special-case ssh command uses the "batch" user's DSA private key as its identity. The rsync command's target remote system will be accessed using the "batch" user.
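For reference, creating such a dedicated batch key without a passphrase might look like the following sketch (the file path and comment are placeholders; -N '' sets an empty passphrase, with the trade-offs described above):
ssh-keygen -t rsa -b 4096 -N '' -C 'batch access key' -f ~batch/.ssh/id_rsa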
I am new to EC2. I created my security credentials from this site:
http://paulstamatiou.com/how-to-getting-started-with-amazon-ec2
It worked great, I rebooted and now when I try to connect I get a login/password prompt. (Which I never set up.) After several attempts I get this error:
Permission denied (publickey,gssapi-with-mic).
What am I doing wrong?
Two possibilities I can think of, although they are both mentioned in the link you referenced:
You're not specifying the correct SSH keypair file or user name in the ssh command you're using to log into the server:
ssh -i [full path to keypair file] root@[EC2 instance hostname or IP address]
You don't have the correct permissions on the keypair file; you should use
chmod 600 [keypair file]
to ensure that only you can read or write the file.
Try using the -v option with ssh to get more info on where exactly it's failing, and post back here if you'd like more help.
[Update]: OK, so this is what you should have seen if everything was set up properly:
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Are you running the ssh command from the directory containing the ec2-keypair file? If so, try specifying -i ./ec2-keypair just to eliminate path problems. Also check "ls -l [full path to ec2-keypair]" and make sure the permissions are 600 (displayed as -rw-------). If none of that works, I'd suspect the contents of the keypair file, so try recreating it using the steps in your link.
The key for me to be able to connect was to use the "ec2-user" user rather than root. I.e.:
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
I noticed that for some AMIs, like Amazon Linux, ec2-user@xxx.XX.XX.XXX would work, but for an Ubuntu image I had to use ubuntu@ instead. It was never a problem with the .pem, just with the user name.
In my case it was because the permissions on my home directory were 775, and SSH is not happy about that. It should work after executing:
server$ chmod go-w ~/
server$ chmod 700 ~/.ssh
server$ chmod 600 ~/.ssh/authorized_keys
I had a very similar experience this afternoon. I was setting up Django on EC2, and suddenly I could not SSH into the box anymore. Luckily I still had an active connection, so I modified /etc/ssh/sshd_config to set:
PasswordAuthentication yes
and set password for ec2-user, then I can login by entering the password.
However, after some googling I found this thread: http://ubuntuforums.org/showthread.php?t=577279. It turned out that during my setup of Django I had changed the permissions on my home directory, and SSH is very strict about this. So the file permissions must be set correctly.
I ran into this problem too, and I found it happened because I forgot to add the user name before the host name:
like this:
ssh -i test.pem ec2-32-122-42-91.us-west-2.compute.amazonaws.com
and when I added the user name:
ssh -i test.pem ec2-user@ec2-32-122-42-91.us-west-2.compute.amazonaws.com
it worked!
Tagging on to mecca831's answer:
ssh -v -i generated-key.pem ec2-user@11.11.11.11
[ec2-user@ip-11.11.11.11 ~]$ sudo passwd ec2-user
newpassword
newpassword
[ec2-user@ip-11.11.11.11 ~]$ sudo vi /etc/ssh/sshd_config
Modify the file as follows:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
# EC2 uses keys for remote access
#PasswordAuthentication no
Save
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd stop
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd start
You should be able to exit and ssh back in as follows:
ssh ec2-user@11.11.11.11
and be prompted for a password, no longer needing the key.
Are you sure you have used the right instance? I ran into this problem and realized that something like 4 of the Ubuntu instances I tried did not have SSH servers installed on them.
For a list of good servers, see "Getting the images" about halfway down the page. It sounds like you may be using something else... the default username is ubuntu on these images.
https://help.ubuntu.com/community/EC2StartersGuide
I was able to login using ec2-user
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
After about a half hour of searching and trying to debug this, I was able to figure it out. My situation involved using the same pem file for two different EC2 instances; it worked for one and not the other.
My first instance, which it worked on, was the standard AWS Linux AMI amzn-ami-hvm-2014.03.2.x86_64-ebs. I simply used
ssh -i mypemfile.pem ec2-user@myec2ipaddress
and it worked.
I then launched a Fedora instance (Fedora-x86_64-19-20140407-sda) and tried the same command, but kept getting:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
After changing my username from ec2-user to fedora, it worked!
ssh -i mypemfile.pem fedora@myec2address
None of the above helped me, but futzing with the user seemed promising. For my config, using 'ubuntu' was right:
ssh -i [full path to keypair file] ubuntu@[EC2 instance hostname or IP address]
I recommend against setting a password as some other answers suggest. Using the key file is both safer (no one can guess your passwords) and more convenient (once you set up a config file). Here's a basic ~/.ssh/config:
Host my-ec2-server
HostName 11.11.11.11
User ec2-user
IdentityFile /path/to/generated-key.pem
Now you can just type ssh my-ec2-server and you're in! And as also mentioned in other answers, use -v to get extra info when your connection isn't working.
If the issue is consistent and has happened about 10-15 times in a row even after changing file permissions to 400 or 600, then almost certainly something is wrong on the EC2 instance itself, so make sure:
Check the logs when you try to ssh to the instance by adding -v at the end, and see whether they report anything specific.
Make sure you use the correct user name for ssh (e.g. ubuntu); it depends on the Linux distribution, the users you have added, and whether you have permitted SSH access for the root user.
Then if nothing helps, follow the documentation here https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#TroubleshootingInstancesConnectingMindTerm
to fix it. That helped in my case; it happened because of messed-up directory/file permissions.
If you have a PPK file working on a PC, export it as an OpenSSH file using puttygen.exe on the PC, and use that on a Mac (or any Unix machine).
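If you'd rather do it from a Unix command line, the puttygen tool (from the putty-tools package on many distributions) can do the same conversion; the file names here are placeholders:
puttygen mykey.ppk -O private-openssh -o mykey.pem
chmod 600 mykey.pem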
I was getting the same error --
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-with-mic)
As I was using a PPK file on Windows, I followed the steps described above and Bingo!
$ ssh -i ec2-openssh-key root@ec2-instance-ip
I had the same problem using the AWS Toolkit for Eclipse. I created the Getting Started instance OK and opened a shell. However, the user was set to ec2-user. I used the Open Shell As... command and set the user to root. Then it worked.
Had a similar issue. Here are the steps I used to set up SSH keys and forwarding on the Mac. I made these notes for myself - they may help someone... check them against your config.
The assumption here is there are no keys setup. If you already have the keys setup skip this section.
$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
Modify ~/.ssh/config, adding an entry for the key file; it should look similar to:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa
Store the private key in the keychain:
$ ssh-add -K ~/.ssh/id_rsa
Now test it with: ssh -A username@yourhostname
This should forward your key to yourhostname. Assuming your keys are set up, you should connect without issue.
I was getting this error when trying to ssh into an EC2 instance on a private subnet from the bastion. To fix it, you have to run ssh-add -K, as follows:
Step 1: run "chmod 400 myEC2Key.pem"
Step 2: run "ssh-add -K ./myEC2Key.pem" on your local machine
Step 3: ssh -i myEC2Key.pem root@ec2-107-20-4-100.compute-1.amazonaws.com
Step 4: Now try to ssh to EC2 instance that is on a private subnet without specifying the key, for example, try ssh ec2-user#ipaddress.
Hope this will help.
Note: This solution is for Mac.
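For what it's worth, step 4 works because of agent forwarding: the bastion connection must be made with forwarding enabled so that the private instance can use your locally loaded key. A sketch (the host names are placeholders):
# Connect to the bastion with agent forwarding (-A)
ssh -A ec2-user@bastion-host
# From the bastion, the forwarded agent supplies the key
ssh ec2-user@private-ip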