EC2 instance putty re-used keypair not recognised - amazon-ec2

First up, it's nearly impossible to identify whether this is a duplicate, given the generic nature of the displayed error. This is potentially a very specific and niche scenario. Please do not falsely mark this as a duplicate. It's sad that I have to type this :(
I have 2 EC2 Ubuntu instances. The first one was created 6 months ago and is working perfectly.
I created a second Ubuntu instance this morning and told it to use the same key pair as the existing instance. They're in the same zone(?) but a different base VPC. I selected the key pair from the list, so I presume it should be visible to both VPCs. From what I could see, that should be OK.
Both are Ubuntu, so the user is "ubuntu". Both are using the same key pair. I merely cloned my saved PuTTY config and changed the public IP before attempting to connect; however, I'm getting the "PuTTY Fatal Error: Disconnected: No supported authentication methods available (server sent: publickey)" error.
Unless I'm mistaken, the same .ppk should work for both instances; that's the entire purpose of key pairs. The only thing I can presume is that AWS failed to associate the key pair with the new instance.
What are the likely reasons for this happening?
The AWS documentation (https://aws.amazon.com/premiumsupport/knowledge-center/linux-credentials-error/) says to check the username ("ubuntu") and the key pair name, which are both correct.
I'm going to blow the instance away and start again, but it would be nice to know what AWS is doing wrong so I can avoid the issue in the future.
Update: New instance... I imported a different public key and used that, and the problem persists when I try to connect with its associated .ppk.

The issue was the AWS Amazon Machine Image (AMI) I was using: the latest Ubuntu AMI. When I switched to the other Ubuntu AMI, it worked fine.
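If you want to confirm whether AWS actually associated the key pair with the new instance, a rough sketch with the AWS CLI (the instance ID and key pair name below are placeholders) would be:

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].[InstanceId,KeyName]' --output table
aws ec2 describe-key-pairs --key-names my-keypair --query 'KeyPairs[].[KeyName,KeyFingerprint]' --output table

If KeyName matches the key pair you expect, the association is fine and the problem is more likely on the instance side (AMI, cloud-init, or username).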

Related

NLA error after updating AWS EC2 instance type

Has anyone encountered the NLA error after changing instance type for an EC2 instance?
Getting this error after upgrading a domain-joined instance (t3.2xlarge) to the recommended instance type according to Compute Optimizer (m6i.2xlarge).
Cannot RDP using the local administrator account either; same NLA error.
Also made a re:Post question but no answer yet.
Kind regards,
Ken
Things I have tried:
Changing back to t3.2xlarge, connected using domain credentials OK
Changing to m5.2xlarge, connected using domain credentials OK
Added another NIC when it was on m6i.2xlarge, NLA error on the second interface.
(Don't think this matters, since the instance is HVM) Upgraded to the latest PV driver, changed instance type to m6i.2xlarge, NLA error.
Launched an m6i.2xlarge instance in a different subnet (AZ), joined domain OK, connected using domain credentials OK; changed to t3.2xlarge, NLA error; changed back to m6i.2xlarge, connected using domain credentials OK
Launched another m6i.2xlarge instance in the same subnet as the t3.2xlarge, swapped the root volume, NLA error. Swapped back the volumes, connected OK.
All these tests lead me to think there is an incompatibility between gen 5 and gen 6 Xeon processors, which is strange; at first I thought it was a network card issue.
Copy pasted from my own answer on RE:POST
Managed to isolate the cause after performing some rescuing via SSM.
The issue seems to stem from the CPU generation leap in the upgrade.
I had always thought each component (storage, compute, and networking) was separate, but the ENI configuration was lost during the upgrade, so the server had trouble contacting the DCs for authentication (i.e. it did not know where the DNS server was).
Without this link to the DCs, NLA can never succeed.
So if you are going to upgrade to the latest generation:
1. While on the current instance type (while you can still RDP to the EC2 instance), navigate to System Properties and go to the Remote tab.
2. Untick the NLA option, then apply and save the change.
3. Shut down the instance and change to the desired instance type (a CLI sketch of this step follows the list).
4. RDP to the instance using the Administrator account.
5. Here you will see that the network interface configuration is empty, so add your DNS server IP address back in.
6. Confirm you have a connection to the DCs (by pinging or something of the sort), then repeat step 1, but this time enable the NLA option and save.
7. Reboot, and voilà, you should now have access to the EC2 instance using your domain logins again.
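As a rough sketch of the stop/resize/start in step 3 using the AWS CLI (the instance ID is a placeholder; substitute whatever target type you need):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=m6i.2xlarge
aws ec2 start-instances --instance-ids i-0123456789abcdef0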

Unreachable Amazon EC2 Instance

I have a running Amazon EC2 instance that contains a personal wiki. It has been running fine for years, and today I'm suddenly unable to log on using the private key .ppk file with either PuTTY or WinSCP! (An hour ago I still could!)
I was panicking and rebooted the EC2 instance. (I didn't stop and start the instance, I chose reboot.)
My question is: is my data lost? And if not, how can I recover it? I can't SSH to the machine, and it seems my .pem/.ppk file, which I generated a long time ago, doesn't work anymore.
Your help is much appreciated; it would save me a lot of hard work! Thanks!
You can try starting another EC2 instance and attaching the EBS volume (of the instance you care about) to it. Then all you have to do is mount it, and your data should be intact.
You'll have to stop the original instance first. Also, this presumes you don't have the drive encrypted.
Your data will be lost only if you used instance store. If you used EBS, your data is intact.
If you can't SSH to your server, use AWS Systems Manager to shell into your instance and debug the SSH connection: check whether sshd is up, the contents and permissions of .ssh/authorized_keys, etc.
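Assuming the SSM agent is running on the instance, the instance has an IAM role with SSM permissions, and you have the Session Manager plugin installed locally, those checks might look roughly like this (the instance ID and home directory are placeholders):

aws ssm start-session --target i-0123456789abcdef0
# then, inside the session:
sudo systemctl status ssh                       # the service is named "ssh" on Ubuntu, "sshd" on most other distros
sudo ls -l /home/ubuntu/.ssh/authorized_keys    # should be owned by the login user, typically mode 600
sudo cat /home/ubuntu/.ssh/authorized_keys      # confirm your public key is still listed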

Opening Realm Dashboard on Amazon EC2

I'm trying to set up Realm Object Server on Amazon EC2.
I've used the public AMI on North Virginia, and I have a running instance. I'm doing all this from Europe as most of my users are in the USA.
Now I'm trying to access ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9080.
I've tried to open the different ports as indicated, but I feel that what I've done is incorrect.
I've also tried opening all traffic, but I still get a timeout on the page. I'm probably doing something wrong here; I'm not sure what.
Thanks for your help!
Thanks for trying out our AWS AMI! It would be helpful to know the AMI ID that you ran, as that can help us track down problems for others. In fact, we've released new AMIs this morning. Check our website for the latest available AMI IDs.
In the meantime, can you check whether the realm-object-server service is running? You can check this over SSH by running:
sudo service realm-object-server status
So I managed to make things work!
I guess my issue was that I was somehow editing the wrong security group.
When looking at your running instances, be sure to click the security group shown at the right of the instance row, so that you configure the correct one.
Then configure a Custom TCP rule for port 9080.
That's it!
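For reference, the same rule can be added from the CLI, roughly like this (the security group ID is a placeholder, and 0.0.0.0/0 opens the port to the whole internet, so narrow the CIDR if you can):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9080 --cidr 0.0.0.0/0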

AWS EC2 Instance Hacked

One of my EC2 instances was hacked a few days ago.
I tried logging in via SSH to the server, but I couldn't connect. I am the only one with access to the private key, and I keep it in a safe place.
Luckily, I had a backup of everything and was able to move the web app to a new instance quite fast.
My concern right now is that I don't know how my instance was hacked in the first place.
Why can't I log in via SSH using my private key? I would assume that the public key stored on the server (in authorized_keys) can't be (easily) deleted.
Is there a way I can find out how the hacker gained access to the instance? Perhaps a log file that would point me in the right direction.
Should I attach the EBS volume in question to a new instance and see what's on it, or what are my options in this case?
Right now, it seems I have no access at all to the hacked instance.
Thank you!
@Krishna Kumar R is correct about the hacker probably changing the SSH keys.
Next steps:
Security concerns (do these now!):
Stop the instance, but don't terminate yet
Revoke/expire any sensitive credentials that were stored on the instance, including passwords and keys for other sites and services. Everything stored on that instance should be considered compromised.
Post-mortem
Take an EBS snapshot of the instance's root volume (assuming that's where logs are stored)
Make a new volume from the snapshot and attach to a (non-production) instance
Mount it and start reading logs. If this is a Linux host and you had port 22 open in the firewall, I'd start with /<mount-point>/var/log/auth.log
They might have logged into your machine via password. In the sshd config, check the value of PasswordAuthentication. If it is set to yes, then users can log in to the instance remotely with a password. Check /var/log/secure for any remote logins; it will show all logins (password- or key-based).
If someone logged in as 'root', they can modify the ssh keys.
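A quick way to run those checks once you have shell access (the log path assumes an Amazon Linux/RHEL-style host; Debian/Ubuntu logs logins to /var/log/auth.log instead):

sudo sshd -T | grep -i passwordauthentication               # effective sshd setting
sudo grep -iE 'accepted|failed password' /var/log/secure    # successful and failed remote logins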
The fact that you are unable to login to the machine does not mean that it has been "hacked". It could be due to a configuration change on the instance, or the instance might have changed IP address after a stop/start.
Do a search on Stack Overflow for standard solutions to problems connecting to an instance and see if you can connect (e.g. recheck the IP address, check the security group, turn on ssh -v debugging, check network connectivity & VPC settings, view the Get System Log output, etc.).
Worst case, yes, you could:
Stop the instance
Detach the EBS volume
Attach the EBS volume to another EC2 instance
Access the content of the EBS volume
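A hedged sketch of that worst-case workflow with the AWS CLI (all IDs, the Availability Zone, and the device names are placeholders, and the device the volume appears as varies by instance type):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "copy of compromised root volume"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0fedcba9876543210 --device /dev/sdf
# on the analysis instance:
sudo mkdir -p /mnt/forensics
sudo mount /dev/xvdf1 /mnt/forensics       # or /dev/nvme1n1p1 on Nitro-based instances
sudo less /mnt/forensics/var/log/auth.log

Taking a snapshot first (rather than moving the original volume) keeps the evidence intact while you poke around.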

SSH-ing to Google Compute Engine instance through Mac OS X terminal

I have created an instance on Google Compute Engine, but I can't seem to SSH to it using the terminal. The command that I used was:
gcloud compute ssh example-instance
The error that I got was:
ERROR: (gcloud.compute.ssh) Could not SSH to the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
When I googled the error, I was led to this link: Unable to SSH to Google Cloud
I went and checked the firewall rules as described at https://cloud.google.com/compute/docs/troubleshooting#ssherrors and things seem fine. I also went into ~/.ssh and checked for google_compute_engine and google_compute_engine.pub, which indicates the presence of my private and public keys. I was wondering what I should do next. Is this a problem specific to Mac workstations?
Any help would be really appreciated.
Personally, I had some trouble getting my SSH keys set up correctly by following the Google Compute Engine docs. Found another logical solution...
This didn't take long and solved the problem (i.e., simple SSH access to a Google Cloud VM via the macOS Terminal)...
Follow these simple instructions provided by nixCraft:
https://www.cyberciti.biz/faq/google-cloud-compute-engin-ssh-into-an-instance-from-linux-unix-appleosx/
Here are a few other things to check:
Can you ssh into that instance from a browser, using the "SSH" button in the Cloud Console?
If not, try a newly created instance using default settings and compare how your example-instance differs.
Run gcloud config list and confirm that the values for project, account, region, and zone match your expectations.
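A few gcloud commands that may help narrow it down (the instance name and zone are placeholders; the trailing -- -vvv passes verbose flags through to the underlying ssh client):

gcloud compute firewall-rules list
gcloud compute instances describe example-instance --zone us-central1-a --format='value(status)'
gcloud compute ssh example-instance --zone us-central1-a -- -vvv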
