Nothing happens on libcloud create_node (Amazon EC2)

I use libcloud to create a new node instance on Amazon EC2.
conn.create_node returns a valid node object, and printing node.__dict__ shows the expected values.
However, when I check my EC2 dashboard the new machine does not appear there.
Do I need my Python app to stay open so that the node is actually created?
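For context, a minimal libcloud sketch of this flow, with placeholder credentials and a hypothetical AMI id. Note that create_node is a synchronous API call, so the node exists as soon as the call returns and the script does not need to stay running:

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver
    from libcloud.compute.base import NodeImage

    # Placeholder credentials and region.
    cls = get_driver(Provider.EC2)
    conn = cls("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")

    # Hypothetical AMI id; pick a real one for your region.
    image = NodeImage(id="ami-xxxxxxxx", name=None, driver=conn)
    size = [s for s in conn.list_sizes() if s.id == "t2.micro"][0]

    # The instance exists as soon as this call returns; the console can lag.
    node = conn.create_node(name="test-node", image=image, size=size)
    print(node.__dict__)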

Found it: the instance was actually created, but for some reason the AWS console did not show it even after a refresh. Logging out and back in solved it.

Related

Can't delete EC2 instance or EBS volume

I'm trying to remove an EC2 instance that was added by mistake while trying to figure out how to deploy my API code.
Every time I terminate it, another one appears.
I now have a list of terminated instances, and one too many running instances.
I also have an extra EBS volume which I need to remove but can't, because the Delete option is disabled.
The docs say this should work; there is no mention of the Delete option ever being unavailable.
I can detach the volume, but then another one appears.
This question is similar to another one, which suggests the problem may be caused by a cluster, but I don't have any. I followed those instructions just in case, but no clusters are listed.
There is likely an Auto Scaling group that is recreating it. Open the EC2 console and click Auto Scaling Groups in the left-side menu. Delete the ASG, and any remaining instances should be terminated automatically.
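A boto3 sketch of that cleanup, if you prefer scripting it (the group name is hypothetical); ForceDelete also terminates any instances the group still owns:

    import boto3

    asg = boto3.client("autoscaling")

    # List every Auto Scaling group to find the one re-creating the instance.
    for group in asg.describe_auto_scaling_groups()["AutoScalingGroups"]:
        print(group["AutoScalingGroupName"], group["DesiredCapacity"])

    # Deleting with ForceDelete also terminates the group's remaining instances.
    asg.delete_auto_scaling_group(
        AutoScalingGroupName="my-asg",  # hypothetical name
        ForceDelete=True,
    )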

How can I start an instance from the new EC2 CLI?

I have an EC2 micro instance. I can start it from the console, ssh into it (using a .pem file), and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the two is that the old CLI used .pem files, whereas the new interface uses access keys. The instance has an access key pair associated with it, but I have tried all the credentials I can find and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged (start from console or old CLI, web access, and ssh all work), but the new CLI still fails.
I realize that there is a means for updating the access key pairs by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?
The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set. But they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when I had the environment variables and the credentials file correct and in sync.
Configure the new AWS CLI with "aws configure", using the region in which your EC2 instance resides, and then try the same command. The instance should start.
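A boto3 sketch of the same operation, for comparison (the region shown is an assumption; use the one where the instance actually lives). Pinning the region explicitly avoids any mismatch between environment variables and the credentials file:

    import boto3

    # InvalidInstanceID.NotFound is also what you get when the client is
    # pointed at the wrong region, so pin the region explicitly.
    ec2 = boto3.client("ec2", region_name="us-east-1")  # your instance's region
    ec2.start_instances(InstanceIds=["i-xxx"])  # placeholder id from the question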

How to create an EC2 instance in CloudFormation from the AMI of an instance which is also created by CloudFormation

I have created an EC2 instance using a CloudFormation script, and in that process I executed around 20 commands in "AWS::CloudFormation::Init". This is a Windows instance.
After that, I created an image from it and tried to create another EC2 instance using that image, with a couple of commands I wanted executed in "AWS::CloudFormation::Init".
This is giving me a problem. After the instance is created, it does not run the new commands I specified in the template, but instead tries to run the commands I specified while creating the old EC2 instance from which the image was taken. This happens through the sysprep process that was given in one of the docs.
Is there any way to execute only the new commands, leaving out the old ones, when the new image is used? I tried many alternatives; it either executes the old commands or none at all.
Have you stopped your instance before creating your image? (as advised here)
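If you want to script that suggestion, a boto3 sketch (the instance id and image name are hypothetical): stop the instance, wait until it is fully stopped, and only then capture the image:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Stop the instance and wait until it is fully stopped before imaging.
    ec2.stop_instances(InstanceIds=["i-xxxxxxx"])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=["i-xxxxxxx"])

    # Capture the AMI from the stopped instance.
    ec2.create_image(InstanceId="i-xxxxxxx", Name="my-windows-base")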

Can't log in to custom AMI

I am trying to launch an instance from an AMI that is found here:
https://aws.amazon.com/amis/aws-tools
The instance is launched but when I try to login, I get the following message:
ssh -i oct9.pem root@ec2-50-16-125-42.compute-1.amazonaws.com
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
If I launch a new instance using the built-in wizard, it works as expected with the same .pem key.
This AMI was working as expected till recently. I have used it before for a few instances. I would like to use this because it has several utilities pre-installed.
When you produce a new image from a running instance, you end up getting locked out of the running instance. I'm not sure why, but you can then re-launch a new instance from the image you just created.
It's unclear whether or not this is the issue you're running into, though.
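A boto3 sketch of that re-launch workaround (all ids are placeholders, and the key name assumes the key pair behind oct9.pem): start a fresh instance from the new image and log in to that one instead:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",   # the image you just created
        InstanceType="t2.micro",
        KeyName="oct9",           # assumes the key pair behind oct9.pem
        MinCount=1,
        MaxCount=1,
    )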

AWS console not showing all instances during volume attach

I do the following using AWS web console:
Attach EBS volume-A to instance-A. Make some changes to data on volume-A and detach it
Launch new instance-B (in the same zone as instance-A)
Try to attach volume-A to the new instance-B, but the new instance does not appear in the instance list during the attach-volume process (dialog box).
If I try the same attach using the command-line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong in the console? I tried a page refresh in step 3, but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same zone. So if you are going to attach a volume to an instance, check the instance's zone (for example, via its currently attached volume). If they do not match, create a new instance in the same zone as the volume you need to attach.
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b to make it work.
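A quick boto3 check of that condition (the ids are placeholders): compare the volume's zone with the instance's zone before attaching:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vol = ec2.describe_volumes(VolumeIds=["vol-xxxxxxxx"])["Volumes"][0]
    inst = ec2.describe_instances(InstanceIds=["i-xxxxxxx"])["Reservations"][0]["Instances"][0]

    # Both zones must match, e.g. us-east-1a and us-east-1a.
    print(vol["AvailabilityZone"], inst["Placement"]["AvailabilityZone"])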
I faced this problem yesterday and the day before as well. It looks like a problem with Amazon's cache; not sure why.
To bring things back as they were, I had to sign out and make sure everything was good. But it's always good to work with the CLI; it works better.
Although the user interface may not list the instance ID, you can attempt to add the volume anyway. If it's genuinely impossible (rather than a cache issue) you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually then type your mount point (e.g. /dev/sdf) and click Attach.
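For reference, the equivalent API call as a boto3 sketch (ids are placeholders), which bypasses the console dropdown entirely, the same route the question reports working:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.attach_volume(
        VolumeId="vol-xxxxxxxx",
        InstanceId="i-xxxxxxx",
        Device="/dev/sdf",   # the device name you would type into the dialog
    )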
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.
