How can I start an instance from the new EC2 CLI?

I have an ec2 micro instance. I can start it from the console, ssh into it (using a .pem file) and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the old and new CLIs is that the former used .pem files whereas the new interface uses access key pairs. The instance has an access key pair associated with it, but I have tried all the credentials I can find, and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged: starting from the console or the old CLI, web access, and ssh all still work, but the new CLI still fails.
I realize that there is a means of updating the key pair by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?

The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set, but they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when the environment variables and the credentials file were correct and in sync.
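Note that the AWS CLI documents AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as the variables it reads, so it's worth keeping both sets consistent with the credentials file. A minimal sketch of checking that they agree (the key values and instance ID below are placeholders):

# the variable names the AWS CLI documents (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecret
# these should match ~/.aws/credentials:
#   [default]
#   aws_access_key_id = AKIAEXAMPLE
#   aws_secret_access_key = examplesecret
aws ec2 describe-instances --instance-ids i-xxx   # read-only sanity check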

Configure your AWS CLI with "aws configure", specifying the region in which your EC2 instance resides, and then try the same command again. The instance should start.
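A minimal sketch of that setup (the region and instance ID here are placeholders):

aws configure                     # prompts for access key, secret key, default region, output format
aws ec2 start-instances --instance-ids i-xxx --region us-east-1   # or pass the region explicitly per command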

Related

AWS EC2: Instance from my own Windows AMI is not reachable

I am a Windows user and wanted to use a spot instance based on my own EBS-backed Windows AMI. For this I have followed these steps:
I had my own on-demand instance with specific settings
Using the AWS console I used the "Create Image (EBS)" option to create an EBS-backed Windows AMI. It worked and the AMI was created successfully.
Then, using this new AMI, I launched a spot medium instance; it was created successfully and is now running with status checks passed.
After waiting an hour or more I am trying to connect to it using the Windows 7 RDC client, but it is not reachable; the client reports its standard error that the computer is either not reachable or not powered on.
I have tried to achieve this goal and have created and deleted many volumes, instances, and snapshots, but I am still unsuccessful. Does anybody have a solution to this problem?
Thanks
Basically what's happening is that the existing administrator password (and other user authentication information) for Windows is only valid in the original instance, and can't be used on the new "hardware" that you're launching the AMI on (even though it's all virtualized).
This is why RDP connections will fail to newly launched instances, as will any attempts to retrieve the administrator password. Unfortunately you have no choice but to shut down the new instances you've been trying to connect to because you won't be able to do anything with them.
For various reasons the Windows administrator password cannot be preserved when running a snapshot of the operating system on different hardware (even virtualized hardware) - this is a big part of the reason why technologies like Active Directory exist, so that user authentication information is portable between machines and networks.
It sounds like you did all the steps necessary to get this working except for one - you never took any steps to cause a new password to be assigned to your newly-launched instances based on the original AMI you created.
To fix this, BEFORE turning your instance into a custom AMI that can be used to launch new instances, you need to (in the original instance) run the Ec2ConfigService Settings tool (found in the start menu when remoted into the original instance using RDP), and enable the option to generate a new password on next reboot. Save this setting change.
Then when you do create an AMI from the original instance, and use that AMI to launch new instances, they will each boot up to your custom Windows image but will choose their own random administrator password.
At this point you can go to your ec2 dashboard and retrieve the newly-generated password for the new instance based on the old AMI, and you'll also be able to download the RDP file used to connect to it.
One additional note is that Amazon warns that it can take upwards of 30 minutes for the retrieval of an administrator password after launching a new instance, however in my previous experience I've never had to wait more than a few minutes to be able to get it.
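If you'd rather not poll the console, the same password can be fetched and decrypted from the command line once it's available (the instance ID and key path below are placeholders):

aws ec2 get-password-data --instance-id i-xxxxxxxx --priv-launch-key /path/to/launch-keypair.pem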
Your problem is most likely that the security group you used to launch the AMI does not have RDP (TCP port #3389) enabled.
When you launch the Windows AMI for the first time, AWS will populate the quick-launch security group with this port enabled. However, when you launch from the subsequent AMI, you will have to ensure that this port is open in your security group.
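If the rule is missing, it can also be added from the command line (the group ID and CIDR below are placeholders; a narrower CIDR than 0.0.0.0/0 is safer):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3389 --cidr 0.0.0.0/0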

Is it possible to change an EC2 instance key name as shown in the AWS console?

I changed the AWS key for SSH access to an EC2 instance by creating a new key pair from the console and then adding the public key value to the authorized_keys file, which worked as expected.
However, the AWS console still shows the old key name as the key for that instance. Will this be a problem in the long run? How can I change the old key name to the newly created key name?
You cannot change the initial key name of an Amazon EC2 instance as shown in the AWS Management Console.
However, as you've already explored, it is not a problem to add another key for everyday usage, and it won't be a problem in the long run either. In fact, you could also delete the former key pair via the AWS Management Console with zero impact on your already created instances; see Eric Hammond's answer to "consequences of deleted key pair on ec2 instance" for more details on the subject matter.
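For reference, adding a secondary key amounts to appending its public half on the instance; a sketch, assuming a hypothetical new-key file name and the stock ubuntu user:

ssh-keygen -t rsa -f ~/.ssh/new-key                    # generate a new pair locally
cat ~/.ssh/new-key.pub | ssh -i old-key.pem ubuntu@ec2-host 'cat >> ~/.ssh/authorized_keys'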

AWS console not showing all instances during volume attach

I do the following using AWS web console:
Attach EBS volume-A to instance-A. Make some changes to data on volume-A and detach it
Launch new instance-B (in the same zone as instance-A)
Try to attach volume-A to the new instance-B. But the new instance does not appear in the instance list in the attach-volume dialog box.
If I try the same attach using command line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong in the console? I tried a page refresh in step 3 but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same Availability Zone. So before attaching a volume to an instance, check the zone of the volume against the zone of the instance. If they do not match, create a new instance in the same zone as the volume you need to attach.
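A quick way to compare the two zones from the CLI (the IDs below are placeholders):

aws ec2 describe-volumes --volume-ids vol-xxx --query 'Volumes[0].AvailabilityZone'
aws ec2 describe-instances --instance-ids i-xxx --query 'Reservations[0].Instances[0].Placement.AvailabilityZone'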
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b (by snapshotting it and creating a new volume from the snapshot in that zone) to make it work.
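A sketch of that move (the volume, snapshot, and zone values are placeholders; a new volume is created from a snapshot because volumes cannot be moved directly):

aws ec2 create-snapshot --volume-id vol-xxx --description "copy to us-east-1b"
# wait until the snapshot reaches the "completed" state, then:
aws ec2 create-volume --snapshot-id snap-xxx --availability-zone us-east-1b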
I faced this problem yesterday and the day before as well. It looks like a caching problem on Amazon's side, though I'm not sure why.
To get things back to normal I had to sign out and back in. It's always good to work with the CLI anyway; it works better.
Although the user interface may not list the instance ID, you can attempt to add the volume anyway. If it's genuinely impossible (rather than a cache issue) you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually, then type your device name (e.g. /dev/sdf) and click Attach.
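The equivalent CLI call, which bypasses the console list entirely (the IDs and device below are placeholders):

aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf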
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.

Why are two keypairs both allowing access to my EC2 instance based on a custom AMI?

I created an EBS-backed AMI from a Canonical Ubuntu Maverick instance that was running with a keypair called us-west-01.pem.
Then I started another instance using that AMI and, at startup, assigned a new keypair to it called us-west-02.pem. However, when I tried to scp some data to the instance, I was able to authenticate using us-west-01.pem:
scp -i /.ec2/us-west-01.pem -r /somepath/* ubuntu@myDnsValue:/somepath/
It also works with the correct us-west-02 key. I tried with another key, and it failed. The only explanation is that the key used at the time of preparing the AMI is still accepted. How can I remove it so as to secure each instance with its own key?
Thanks in advance.
Depending on how you create the AMI (bundling it or using rsync), you can remove or omit $HOME/.ssh/authorized_keys for the ubuntu and root users before imaging.
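A minimal cleanup sketch to run in the source instance just before creating the AMI; this assumes a stock Canonical image, where cloud-init installs the launch-time keypair on first boot:

sudo rm -f /root/.ssh/authorized_keys        # drop the baked-in root key
rm -f /home/ubuntu/.ssh/authorized_keys      # drop the baked-in ubuntu key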

Web access amazon ec2 instance command-line

I lost ssh access to my Amazon EC2 instance and I need to access it NOW due to a problem with my service. I was told there is a way to access the command line via the web with a Java applet, but I haven't been able to find it.
Is there a way to access the command line without the .pem file? Terminating/rebooting the instance is not feasible.
AFAIK it is not possible - Amazon does not retain private keys and once your instance has been assigned a keypair, it cannot be reassigned.
You could try to create a new instance with a separate keypair and ssh locally between them, but I don't imagine that is possible.
If it's an EBS-based instance and you were able to stop it, you could mount the EBS volume on a new instance and copy a new key over; however, based on what you said, I don't believe that's possible here. You may need to contact Amazon, but even then, there might not be anything that can be done.
Edit: in the same vein as the previous paragraph, if you have other user accounts with valid login shells, and you have sudo access on one of those accounts, you can do the same as mentioned there: generate a new keypair and append the public key to ~/.ssh/authorized_keys.
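For completeness, a sketch of the EBS rescue path mentioned above; all IDs, device names, and paths are placeholders, and it assumes the stricken instance can in fact be stopped:

aws ec2 stop-instances --instance-ids i-stricken
aws ec2 detach-volume --volume-id vol-rootdisk
aws ec2 attach-volume --volume-id vol-rootdisk --instance-id i-rescue --device /dev/sdf
# on the rescue instance (the device may surface as /dev/xvdf or /dev/xvdf1):
sudo mount /dev/xvdf1 /mnt
cat new-key.pub | sudo tee -a /mnt/home/ubuntu/.ssh/authorized_keys
sudo umount /mnt
# then detach, re-attach the volume as the original instance's root device, and start it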
