Which key is in use when I SSH into my EC2 instance? - amazon-ec2

After running ssh-keyscan I opened the generated known_hosts file and found that there are three keys for my EC2 instance instead of one.
**.**.**.** ssh-rsa ******
**.**.**.** ecdsa-sha2-nistp256 ***
**.**.**.** ssh-ed25519 ***
Are they all used when logging in? Can I safely delete the ones that are not used at all?

If you run ssh-keygen -H -F <your EC2 hostname>, it'll tell you which known_hosts line(s) match that host.
E.g. # Host <whatever>.amazonaws.com found: line 15
If a line isn't matched, you can delete it. Even if you delete one that is used, ssh will just ask you to confirm the key again the next time you connect.
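To see which of those keys is actually negotiated during a login, you can run the client in verbose mode. A minimal check, assuming an OpenSSH client (user and hostname are placeholders):
# print the host key the server actually presented for this connection
ssh -v user@hostname exit 2>&1 | grep 'Server host key'
# e.g.: debug1: Server host key: ssh-ed25519 SHA256:...
Whichever algorithm appears there is the one your client picked; the other known_hosts lines are simply the same server's keys of the other types.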

Related

How to bypass fingerprint prompt for sftp on Windows 10 [duplicate]

I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp I think, not that it matters) is automated and I can't easily accept the new key, even after checking with the third party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (e.g. it's probably okay if you're just playing on your own local home network – but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user@hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's fairly new (added in OpenSSH 7.6).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note: StrictHostKeyChecking=no will automatically add new keys to ~/.ssh/known_hosts and will even let the connection proceed if the host key has changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.
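If you want accept-new without typing the -o flag on every invocation, it can also live in the client config. A minimal sketch, assuming you only want this behavior for hosts under one domain (the Host pattern is a placeholder):
Host *.example.com
    StrictHostKeyChecking accept-new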
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key, you might end up with an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server, which you don't own, can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if you absolutely trust the network and know that the server has not been compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init, immediately after the machine starts.
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a script that will retrieve the key and add it to your known_hosts file. Note that a known_hosts entry has to start with the host name, which the raw .pub file doesn't include, and that DSA host keys are disabled in modern OpenSSH, so substitute the key type your server actually uses:
ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_ed25519_key.pub | sed 's/^/hostname /' >> ~/.ssh/known_hosts
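A variant of the same idea that lets you inspect the fingerprint before trusting it. This is a sketch; the temp-file path and the ed25519 key type are assumptions:
# fetch the host key out-of-band into a temp file
ssh-keyscan -t ed25519 hostname > /tmp/hostkey
# print its fingerprint; compare it against one obtained from the console before trusting it
ssh-keygen -lf /tmp/hostkey
# only then append it
cat /tmp/hostkey >> ~/.ssh/known_hosts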
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt"
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove the old key
ssh-keygen -R "$target_host"
# Add the new key
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts
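Wrapped up as a small script (a sketch; the script name and argument handling are assumptions, not part of the original answer):
#!/usr/bin/env bash
# refresh-hostkey.sh: replace the known_hosts entry for a single host
set -euo pipefail
target_host="$1"
ssh-keygen -R "$target_host"
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts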
You just have to update the current fingerprint that's being sent from the server. Just type in the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all other answers - this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
(Note the semicolon: VAR=value cmd only sets the variable in cmd's environment, so the original one-liner would have expanded $HOST before it was assigned.)
First remove any entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As with this answer, prefer ed25519.
Get a list of SSH host IPs (or DNS names) and output it to a file, ssh_hosts.
Run a one-liner to populate ~/.ssh/known_hosts on the control node (I often do this to prepare target nodes for an Ansible run).
NOTE: this assumes we prefer the ed25519 type of host key.
# add the target hosts' key fingerprints
while read -r line; do ssh-keyscan -t ed25519 "$line" >> ~/.ssh/known_hosts; done < ssh_hosts
# add the SSH key's public part to each target host's authorized_keys file
while read -r line; do ssh-copy-id -i /path/to/key -f "user@$line"; done < ssh_hosts
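To verify the prep worked without getting stuck at a prompt, here is a quick non-interactive check; a sketch assuming the same ssh_hosts file, key path, and user as above:
# BatchMode makes ssh fail instead of prompting, so any host key or auth problem surfaces as an error
while read -r line; do ssh -o BatchMode=yes -i /path/to/key "user@$line" true && echo "$line OK"; done < ssh_hosts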
ssh -o UserKnownHostsFile=/dev/null user@host
Create the following file
~/.ssh/config
with this content:
StrictHostKeyChecking no
This setting makes sure that ssh never asks for a fingerprint check again. Add it very carefully: it is really dangerous, because it blindly accepts every host's fingerprint.

Copy SSH-keys between hosts

I'm performing:
# copy public key to other hosts
for host in ec2-master.eu-west-1.compute.amazonaws.com \
ec2xxx.compute.amazonaws.com \
ec2xxx.compute.amazonaws.com; \
do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
done
So I try to copy the key I've generated on ec2-master.eu-west-1.compute.amazonaws.com to the other servers.
But I still get
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
The authenticity of host 'ec2xxx.eu-west-1.compute.amazonaws.com (10.0.xx.xx)' can't be established.
ECDSA key fingerprint is 3a:63xx:a6:19:xx:23:d1:xx:06:22:xx:a0:b9:8c:xx:cf.
Are you sure you want to continue connecting (yes/no)?
So I got a permission denied. But I don't know why. What am I doing wrong?
Try changing the ssh-copy-id command to:
ssh-copy-id -i ~/.ssh/id_rsa.pub ec2-user@$host
(assuming you're using Amazon Linux -- use ubuntu as the user if you are using Ubuntu)
Update:
I think the problem may be because you are trying to copy a new key over to a host that only accepts logins using an existing key (no passwords allowed).
I couldn't get this to work with ssh-copy-id, but you can do it with a standard ssh command:
cat ~/.ssh/id_rsa.pub | ssh -i AWS_key.pem centos@$host "cat - >> ~/.ssh/authorized_keys"
Where AWS_key.pem is the private part of the key pair that AWS attached to your instance when you launched it.
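If the remote account's ~/.ssh directory doesn't exist yet, the append will fail, and sshd is picky about permissions. A fuller sketch of the same idea (the key paths and centos user are carried over from above):
cat ~/.ssh/id_rsa.pub | ssh -i AWS_key.pem centos@$host \
  "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"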
SSH is trying to tell you that authentication into your hosts has failed and what authentication methods were tried.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
This is what the (publickey,gssapi-keyex,gssapi-with-mic) portion of the log output is telling you.
It is telling you it attempted to authenticate against publickey, gssapi-keyex, and gssapi-with-mic authentication methods.
Typically you or AWS provide an SSH keypair as part of pre-launch setup.
The sshd config is also set to authenticate using keypairs (public + private key = public-key encryption, hence the publickey mentioned in the ssh log).
Therefore, your command
ssh-copy-id -i ~/.ssh/id_rsa.pub $host;
is wrong for a few reasons.
You don't specify a user to log in as, so the login fails unless the username on your local host matches one on the remote machine (for AWS the user could be ec2-user, centos, ubuntu, etc.).
Even if the usernames were to match, since AWS effectively only enables ssh keypair authentication (I am not familiar with GSSAPI), you would only be able to log in with the private key chosen or generated at EC2 instance creation.
If there were some alternative authentication mechanism configured on the host, e.g. user:password, then you would be able to run a modified version of the command.
REMOTE_USER=ec2-user
...
do ssh-copy-id -i ~/.ssh/id_rsa.pub $REMOTE_USER@$host
However, you would be prompted for a user/password each time.
Note: the above command assumes you have enabled a user/password authentication mechanism (it could be temporary). However, for just 3 hosts I might just manually install the keypair at this point.
The language from the "Copy the key to a server" from sshd.com seems to imply that password-based authentication is enabled initially on the hosts.
"Once an SSH key has been created, the ssh-copy-id command can be used to install it as an authorized key on the server. Once the key has been authorized for SSH, it grants access to the server without a password."
I use this script and it works for me. Can you try this:
for host in "${hosts[@]}"
do
echo "$host"
ssh-keyscan "$host" | tee -a ~/.ssh/known_hosts
sshpass -p 'mypass' ssh-copy-id myuser@"$host"
done
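For completeness, a sketch of what the script assumes exists beforehand; the host names are placeholders. Note that sshpass exposes the password on the command line (visible in ps), so prefer key-based authentication where you can:
# define the array of target hosts the loop iterates over
hosts=(host1.example.com host2.example.com host3.example.com)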

How to automatically accept the remote key when rsyncing?

I'm attempting to make a system which automatically copies files from one server to many servers. As part of that I'm using rsync and installing SSH keys, which works correctly.
My problem is that when it attempts to connect to a new server for the first time it will ask for a confirmation. Is there a way to automatically accept?
Example command/output:
rsync -v -e ssh * root@someip:/data/
The authenticity of host 'someip (someip)' can't be established.
RSA key fingerprint is somerandomrsakey.
Are you sure you want to continue connecting (yes/no)? yes
You can add this host's key to known_hosts beforehand like this:
ssh-keyscan $someip >> ~/.ssh/known_hosts
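Putting it together with the rsync call from the question (a sketch; someip stands in for the real address):
# pre-seed the host key, then rsync runs without a confirmation prompt
ssh-keyscan "$someip" >> ~/.ssh/known_hosts
rsync -v -e ssh ./* "root@$someip:/data/"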
If they genuinely are new hosts, and you can't add the keys to known_hosts beforehand (see York.Sar's answer), then you can use this option:
-e "ssh -o StrictHostKeyChecking=no"
I know this was asked 3 years ago, but it was at the top of my Google search and I was unable to get either of these solutions to work correctly from the middle of a Vagrant script. So I wanted to put here the method I found somewhere else.
The solution there talks about updating the ~/.ssh/config or /etc/ssh/ssh_config file with the following blocks of code.
To disable host key checking for a particular host (e.g., remote_host.com):
Host remote_host.com
StrictHostKeyChecking no
To turn off host key checking for all hosts you connect to:
Host *
StrictHostKeyChecking no
To avoid host key verification, and not use the known_hosts file, for the 192.168.0.* subnet:
Host 192.168.0.*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
I hope this helps someone else who runs into this issue.

Permissions error when connecting to EC2 via SSH on Mac OSx

I am new to EC2. I created my security credentials from this site:
http://paulstamatiou.com/how-to-getting-started-with-amazon-ec2
It worked great; then I rebooted, and now when I try to connect I get a login/password prompt (which I never set up). After several attempts I get this error:
Permission denied (publickey,gssapi-with-mic).
What am I doing wrong?
Two possibilities I can think of, although they are both mentioned in the link you referenced:
You're not specifying the correct SSH keypair file or user name in the ssh command you're using to log into the server:
ssh -i [full path to keypair file] root@[EC2 instance hostname or IP address]
You don't have the correct permissions on the keypair file; you should use
chmod 600 [keypair file]
to ensure that only you can read or write the file.
Try using the -v option with ssh to get more info on where exactly it's failing, and post back here if you'd like more help.
[Update]: OK, so this is what you should have seen if everything was set up properly:
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Are you running the ssh command from the directory containing the ec2-keypair file? If so, try specifying -i ./ec2-keypair just to eliminate path problems. Also check ls -l [full path to ec2-keypair] and make sure the permissions are 600 (displayed as rw-------). If none of that works, I'd suspect the contents of the keypair file, so try recreating it using the steps in your link.
The key for me to be able to connect was to use the "ec2-user" user rather than root. I.e.:
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
I noticed that for some AMIs, like Amazon Linux, ec2-user@xxx.XX.XX.XXX would work, but for an Ubuntu image I had to use ubuntu@ instead. It was never a problem with the .pem, just with the user name.
In my case it's because the permission for my home directory is 775, and SSH is not happy about it. It should work after executing:
server$ chmod go-w ~/
server$ chmod 700 ~/.ssh
server$ chmod 600 ~/.ssh/authorized_keys
I had very similar experience this afternoon. I was setting up django on EC2, and suddenly I cannot SSH into the box anymore. Glad I still had an active connection, so I modified /etc/ssh/sshd_config to set:
PasswordAuthentication yes
and set a password for ec2-user; then I could log in by entering the password.
However, after some googling I found this thread: http://ubuntuforums.org/showthread.php?t=577279. It turned out that during my setup of django I had changed the permissions on my home directory, and SSH is very strict about this. So the file permissions must be set correctly.
I ran into this problem too, and I found it happened because I forgot to add the user name before the host name:
like this:
ssh -i test.pem ec2-32-122-42-91.us-west-2.compute.amazonaws.com
and with the user name added:
ssh -i test.pem ec2-user@ec2-32-122-42-91.us-west-2.compute.amazonaws.com
it works!
Tagging on to mecca831's answer:
ssh -v -i generated-key.pem ec2-user@11.11.11.11
[ec2-user@ip-11.11.11.11 ~]$ sudo passwd ec2-user
newpassword
newpassword
[ec2-user@ip-11.11.11.11 ~]$ sudo vi /etc/ssh/sshd_config
Modify the file as follows:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
# EC2 uses keys for remote access
#PasswordAuthentication no
Save
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd stop
[ec2-user@ip-11.11.11.11 ~]$ sudo service sshd start
you should be able to exit and ssh in as follows:
ssh ec2-user@11.11.11.11
and be prompted for a password, no longer needing the key.
Are you sure you have used the right instance? I ran into this problem and realized that something like 4 of the Ubuntu instances I tried did not have SSH servers installed on them.
For a list of good servers, see "Getting the images" about halfway down. Sounds like you may be using something else... the default username is ubuntu on these images.
https://help.ubuntu.com/community/EC2StartersGuide
I was able to login using ec2-user
ssh -i [full path to keypair file] ec2-user@[EC2 instance hostname or IP address]
After about half an hour of searching and trying to debug this, I was able to figure it out. My situation involved using the same .pem file for two different EC2 instances; it worked for one and not the other.
My first instance, where it worked, was the standard AWS Linux AMI amzn-ami-hvm-2014.03.2.x86_64-ebs. I simply used
ssh -i mypemfile.pem ec2-user@myec2ipaddress
and it worked.
I then launched a fedora instance Fedora-x86_64-19-20140407-sda and tried the same command but kept getting:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
After changing my username from ec2-user to fedora it worked!
ssh -i mypemfile.pem fedora@myec2address
None of the above helped me, but futzing with the user seemed promising. For my config, 'ubuntu' was the right user:
ssh -i [full path to keypair file] ubuntu@[EC2 instance hostname or IP address]
I recommend against setting a password as some other answers suggest. Using the key file is both safer (no one can guess your passwords) and more convenient (once you set up a config file). Here's a basic ~/.ssh/config:
Host my-ec2-server
HostName 11.11.11.11
User ec2-user
IdentityFile /path/to/generated-key.pem
Now you can just type ssh my-ec2-server and you're in! And as also mentioned in other answers, use -v to get extra info when your connection isn't working.
If the issue is consistent and happens about 10-15 times in a row, even after changing the key file permissions to 400 or 600, then it almost certainly means something is wrong on the EC2 instance itself, so to make sure:
Check the logs when you try to ssh to the instance by adding -v at the end, and see whether they give out anything specific.
Make sure you use the correct user name for ssh, e.g. ubuntu. That depends on the Linux distribution and the users you added, and on whether you've permitted "root" ssh logins.
Then if nothing helps, follow the documentation here https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#TroubleshootingInstancesConnectingMindTerm
to fix it. It helped in my case, where the cause was messed-up directory/file permissions.
If you have a PPK file working on a PC, then export it as an OpenSSH key file using puttygen.exe on the PC and use that on the Mac (or any Unix machine).
I was getting the same error --
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: Next authentication method: publickey
debug1: Trying private key: ec2-keypair
debug1: read PEM private key done: type RSA
debug1: Authentications that can continue: publickey,gssapi-with-mic
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-with-mic)
As I was using a PPK file on Windows, I followed the steps as described above and Bingo!
$ ssh -i ec2-openssh-key root@ec2-instance-ip
I had the same problem using the AWS Toolkit for Eclipse. I created the Getting Started instance OK and opened a shell. However, the user was set to ec2-user. I used the Open Shell As... command and set the user to root. Then it worked.
Had a similar issue. Here are the steps I used to set up SSH keys and forwarding on the Mac. Made these notes for myself - may help someone... check against your config.
The assumption here is there are no keys setup. If you already have the keys setup skip this section.
$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
Modify ~/.ssh/config adding the entry for the key file:
~/.ssh/config should look similar to:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa
Store the private key in the keychain:
$ ssh-add -K ~/.ssh/id_rsa
Go test it now with: ssh -A username@yourhostname
This should forward your key to yourhostname. Assuming your keys were added, you should connect without issue.
I was getting this error when trying to ssh into an EC2 instance on a private subnet from the bastion. To fix this issue, you have to run ssh-add -K, as follows:
Step 1: run "chmod 400 myEC2Key.pem"
Step 2: run "ssh-add -K ./myEC2Key.pem" on your local machine
Step 3: ssh -i myEC2Key.pem root@ec2-107-20-4-100.compute-1.amazonaws.com
Step 4: Now try to ssh to EC2 instance that is on a private subnet without specifying the key, for example, try ssh ec2-user#ipaddress.
Hope this will help.
Note: This solution is for Mac.
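As an alternative to agent forwarding, newer OpenSSH clients (7.3+) can hop through the bastion with ProxyJump. This is a sketch, not part of the original answer; the addresses are placeholders:
# connect to the private instance via the bastion in one command, no forwarded agent needed
ssh -i myEC2Key.pem -J ec2-user@bastion-public-ip ec2-user@private-instance-ip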
