I wanted to find out if there is a way to assign a private IP when creating a spot instance. By default the AWS Console does not let you do that (it is available when launching an on-demand instance), but I am trying to figure out if I can assign a private IP to a spot instance.
My issue is that I have an image with all my work done (app installs, etc.), and when I use that image to fire off a spot instance and try to log in to the web page, the app is looking for a specific IP. Since a spot instance can't give me the desired IP, things break.
Any help is appreciated.
Answering my own question, so that someone looking for this might find it helpful.
The way to do this through the CLI is shown below. First, make sure to create an ENI (for which you have to specify a subnet, private IP, and security group). Once you have an ENI created, run the following:
# request a one-time t1.micro spot instance (max price $0.04) with the ENI attached at device index 0
ec2-request-spot-instances ami-ID -p 0.04 --key key_name \
--availability-zone us-east-1d \
--instance-type t1.micro -n 1 --type one-time \
--network-attachment eni-dd3889f0:0
Just to clear things up about the last flag ("--network-attachment eni-dd3889f0:0"): the first part is the ENI ID, and the ":0" is the device index, i.e. the index you wish the interface to occupy (e.g. eth0). In the command above, I wanted it to be eth0.
Not sure why AWS does not support this through the console, but at least we have an option with the CLI. Hope it helps someone.
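For anyone on the newer unified AWS CLI, a rough equivalent might look like the following sketch (the subnet, security group, private IP, and AMI values are placeholders to replace with your own):

# create the ENI with the private IP you want (placeholder IDs and address)
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --private-ip-address 10.0.0.25 \
    --groups sg-0123456789abcdef0

# request a one-time spot instance with that ENI attached as eth0;
# the availability zone is implied by the ENI's subnet
aws ec2 request-spot-instances \
    --spot-price 0.04 \
    --instance-count 1 \
    --type one-time \
    --launch-specification '{
        "ImageId": "ami-xxxxxxxx",
        "KeyName": "key_name",
        "InstanceType": "t1.micro",
        "NetworkInterfaces": [
            {"DeviceIndex": 0, "NetworkInterfaceId": "eni-dd3889f0"}
        ]
    }'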
Hoping I'll catch a break as a complete ECS NOOB =)
So far I think I've managed to build out a good start, using ECS-CLI to create a cluster for two docker containers that go together and should run on the same instance (x2). However, at the time of either my ecs-cli up or my ecs-cli compose, I need to get the command to build the task in such a way that it can access a set of environment variables stored in AWS Parameter Store. I don't see any options to do that with ecs-cli, for either 'up' or 'compose'.
What am I missing? Help! I'm not sure yet what other info to provide, or I'd post all that I can, so please let me know what other data/background you need if you think you have an answer.
All comments very much appreciated of course!
See the secrets section in the ecs-cli documentation here. Using that method, you configure ECS to pull those values from SSM Parameter Store and inject them into your container as environment variables.
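Roughly, that means declaring the parameters as external secrets in your compose file; a sketch of the shape it might take (the service name, image, parameter path, and variable name here are placeholders, and the exact syntax is covered in the linked docs):

version: "3"
services:
  web:
    image: my-app              # placeholder image
    secrets:
      - source: db_password    # maps the secret declared below...
        target: DB_PASSWORD    # ...to this environment variable in the container
secrets:
  db_password:
    external: true
    name: /myapp/db_password   # SSM Parameter Store parameter name (placeholder)

Note that the task execution role also needs permission (ssm:GetParameters) to read those parameters.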
I want to deploy my infrastructure in different AWS environments (dev, prod, qa).
That deployment creates a few EC2 instances from a custom AMI. When deployed, instances are in the "running" state. I understand this seems to be related to some constraint in the EC2 API. However, I don't necessarily want my instances started, depending on context. Sometimes, I just want the instances to be created, and they will be started later on. I guess this is a quite common scenario.
Reading the few related issues/requests on HashiCorp's GitHub makes me think so:
Terraform aws instance state change
Stop instances
aws_instance should allow to specify the instance state
There must be some Terraform-based solution which doesn't require relying on the AWS CLI / CDK or a Lambda, right? Something in the Terraform script that, for example, would stop the instance right after its creation.
My Google-fu didn't help me much here. Any help / suggestion for dealing with this scenario is welcome.
Provisioning a new instance automatically puts it in the "running" state.
As Marcin suggested, you can use user data scripts. Here's a pseudo user data script, leaving the actual implementation for you to figure out ;)
#!/bin/bash
# get this instance's ID (e.g. from the instance metadata service) and pass it to the next line
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
You can read about running scripts as part of the bootstrapping here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Basically, it all depends on your use case; we don't generally do this. Still, if you want to provision your EC2 instances and need to put them in the stopped state, then, as bschaatsbergen suggested, you can use user_data in Terraform. Make sure to attach a role with the relevant permission (ec2:StopInstances).
#!/bin/bash
# look up this instance's ID and region from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
# stop this instance
aws ec2 stop-instances --instance-ids "$INSTANCE_ID" --region "$REGION"
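To wire that into Terraform, something along these lines might work as a minimal sketch (the AMI, instance type, and instance profile names are placeholders; the profile's role must be allowed to call ec2:StopInstances, and the AMI needs the AWS CLI installed):

resource "aws_instance" "example" {
  ami                  = "ami-xxxxxxxx"                          # placeholder AMI
  instance_type        = "t3.micro"                              # placeholder type
  iam_instance_profile = aws_iam_instance_profile.stop_self.name # must allow ec2:StopInstances

  # stop the instance as the last step of its first boot
  user_data = <<-EOF
    #!/bin/bash
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
    aws ec2 stop-instances --instance-ids "$INSTANCE_ID" --region "$REGION"
  EOF
}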
As already stated by others, you cannot just "create" instances; they will start out in the "running" state.
Rather, I would ask what the exact use case is here:
Sometimes, I just want the instances to be created, and they will be started later on.
Why do you have to create the instances now and use them later? Can't they be created exactly when they are required? Is there any specific requirement to keep them initialized before they are used? Or do the instances take too long to start?
I have an ec2 micro instance. I can start it from the console, ssh into it (using a .pem file) and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the old and new CLI is that the former used .pem files whereas the new interface uses access key pairs. The instance has an access key pair associated with it, but I have tried all the credentials I can find, and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged: starting from the console or the old CLI, web access, and SSH all still work, but the new CLI still fails.
I realize that there is a means of updating the key pairs by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?
The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set, but they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when I had the environment variables and the credentials file correct and in sync.
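If you hit something like this, a quick way to see which credentials and region the new CLI is actually resolving:

# show where the CLI is reading credentials from, plus the region it resolved
aws configure list
# confirm which account/identity those credentials actually belong to
aws sts get-caller-identity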
Configure your AWS CLI with "aws configure" on the machine where you run the new CLI, using the region in which your EC2 instance resides, and then try the same command again. The instance should start.
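For example, assuming the instance lives in us-east-1 (substitute your own region and real instance ID):

# set the default region and credentials interactively
aws configure
# ...or override the region for a single call
aws ec2 start-instances --instance-ids i-xxx --region us-east-1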
I do the following using AWS web console:
1. Attach EBS volume-A to instance-A. Make some changes to the data on volume-A and detach it.
2. Launch a new instance-B (in the same zone as instance-A).
3. Try to attach volume-A to the new instance-B. But the new instance does not appear in the instance list during the attach-volume process (dialog box).
If I try the same attach using command line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong in the console? I tried a page refresh in step 3, but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same zone. So if you are going to attach a volume to an instance, check the zone of the instance's attached (root) volume. If they don't match, create a new instance in the same zone as the volume that you need to attach.
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b (by snapshotting it and creating a new volume from the snapshot in that zone) to make it work.
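With the CLI, that move might look like the following (the volume, snapshot, and instance IDs are placeholders; wait for the snapshot to complete before creating the new volume):

# snapshot the volume in its current zone
aws ec2 create-snapshot --volume-id vol-12341234 --description "moving to us-east-1b"
# create a new volume from that snapshot in the instance's zone
aws ec2 create-volume --snapshot-id snap-12345678 --availability-zone us-east-1b
# attach the new volume to the instance
aws ec2 attach-volume --volume-id vol-56785678 --instance-id i-xxxxxxx --device /dev/sdf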
I faced this problem yesterday and the day before as well. It looks like a problem with Amazon's cache; not sure why. To bring things back to normal, I had to sign out and sign back in. But it's always good to work with the CLI; it works better.
Although the user interface may not list the instance ID, you can attempt to attach the volume anyway. If it's genuinely impossible (rather than a cache issue), you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually, then type the device name (e.g. /dev/sdf) and click Attach.
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.
I created an EBS-backed AMI from a Canonical Ubuntu Maverick instance that was running with a keypair called us-west-01.pem.
Then I started another instance using that AMI and, at startup, assigned a new keypair to it called us-west-02.pem. However, when I tried to scp some data to the instance, I was able to get authenticated using the old us-west-01.pem:
scp -i ~/.ec2/us-west-01.pem -r /somepath/* ubuntu@myDnsValue:/somepath/
It also works with the correct us-west-02 key. I tried with a third, unrelated key, and it failed. The only explanation would be that the key used at the time of preparing the AMI is still accepted. How can I remove this so as to secure each instance with its own key?
Thanks in advance.
Depending on how you create the AMI (bundling or using rsync), you can remove or omit $HOME/.ssh/authorized_keys for the ubuntu user and for root.
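For example, on a stock Ubuntu instance you might run something like this right before creating the image (paths assume the default ubuntu user; on the first boot of a new instance, cloud-init should add whichever keypair was selected at launch):

# remove the baked-in public keys so old keypairs stop working on instances made from this AMI
sudo rm -f /home/ubuntu/.ssh/authorized_keys /root/.ssh/authorized_keys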