"ERROR: You have not provided a valid image (AMI) value" in knife ec2 server create - amazon-ec2

I am trying out https://github.com/chef/knife-ec2. After bundle-installing the gems, I configured my knife.rb to something like this:
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "username9999"
client_key "#{current_dir}/username9999"
validation_client_name "name_aws_test-validator"
validation_key "#{current_dir}/name_aws_test-validator.pem"
chef_server_url "https://api.opscode.com/organizations/name_aws_test"
cookbook_path ["#{current_dir}/../cookbooks"]
knife[:availability_zone] = "US West (Oregon)"
#knife[:region] = "Oregon"
knife[:image] = "ami-eb99b2db"
knife[:flavour] = "t2.micro"
knife[:aws_access_key_id] = "AKXXXXXXTTTTTTXXXX"
knife[:aws_secret_access_key] = "PrabchdthsoelfmhuhgyE"
knife[:aws_ssh_key_id] = 'ec2-test'
Now knife ec2 server create -r something returns this:
ERROR: You have not provided a valid image (AMI) value
I have made sure that I am not mistyping the AMI that I copied from the community AMIs. Say this is the community listing:
Centos6-template-clean-hvm - ami-07d4f737
I am taking the AMI as ami-07d4f737. Because the error persisted, I also created a new private AMI of my own, but it still returns the same error. Any suggestions?
PS: running with verbosity returns nothing useful

This error could be due to one of the following reasons:
You have the correct AMI ID but the wrong region. Check whether the Oregon region actually has the AMI ID that you are using. Also note that region names are case-sensitive.
You have the wrong AMI ID.
You do not have the privileges to access this AMI, although in that case you would normally get a permission/access-denied kind of error.
Besides, in your knife.rb settings, the value for the availability zone looks wrong. There is no AZ called "US West (Oregon)"; that is the console display name of the region.
For the Oregon region, the availability zones are us-west-2a, us-west-2b and us-west-2c.
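For example, the relevant knife.rb entries might look something like this (just a sketch, assuming the AMI from the question really does exist in Oregon):
knife[:region]            = "us-west-2"      # the API name for the Oregon region
knife[:availability_zone] = "us-west-2a"     # or us-west-2b / us-west-2c
knife[:image]             = "ami-07d4f737"   # must be an AMI ID that exists in us-west-2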

Related

Deploy .sh file in ec2 using terraform

I am trying to deploy a *.sh file from my local machine to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out the private key or password of the ubuntu user in order to deploy? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this:
resource "aws_s3_bucket_object" "script" {
  bucket = module.s3_bucket.bucket_name
  key    = regex("([^/]+$)", var.script_file)[0]
  source = var.script_file
  etag   = filemd5(var.script_file)
}

resource "aws_instance" "this" {
  depends_on = [aws_s3_bucket_object.script]

  user_data = templatefile("${path.module}/.scripts/userdata.sh", {
    s3_bucket  = module.s3_bucket.bucket_name
    object_key = aws_s3_bucket_object.script.id
  })
  ...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.

Ruby: S3 access with AWS instance profile

I have an EC2 instance which has an instance profile attached. I can use the AWS CLI and it uploads to the bucket fine.
root@ocr-sa-test:/# aws s3 ls s3://company-ocr-east/
                           PRE 7_day_expiry/
root@ocr-sa-test:/# touch foo
root@ocr-sa-test:/# aws s3 cp foo s3://company-ocr-east/foo
upload: ./foo to s3://company-ocr-east/foo
root@ocr-sa-test:/# aws s3 rm s3://company-ocr-east/foo
delete: s3://company-ocr-east/foo
I can't get it to work with the aws-sdk gem in Ruby, though. I get Access Denied.
irb(main):001:0> require "aws-sdk"
=> true
irb(main):002:0>
irb(main):003:0> credentials = Aws::InstanceProfileCredentials.new
irb(main):004:1* client = Aws::S3::Client.new(
irb(main):005:1* region: "us-east-1",
irb(main):006:1* credentials: credentials,
irb(main):007:0> )
irb(main):008:0>
irb(main):009:0>
irb(main):010:0>
irb(main):011:1* begin
irb(main):012:2* client.put_object(
irb(main):013:2* key: 'hello.txt',
irb(main):014:2* body: 'Hello World!',
irb(main):015:2* bucket: 'company-ocr-east',
irb(main):016:2* content_type: 'text/plain'
irb(main):017:1* )
irb(main):018:1* rescue Exception => e
irb(main):019:1* puts "S3 Upload Error: #{e.class} : Message: #{e.message}"
irb(main):020:0> end
S3 Upload Error: Aws::S3::Errors::AccessDenied : Message: Access Denied
These commands aren't perfectly equivalent, so it will be instructive to determine exactly what differs on the wire as a result. In particular, the SDK is being instructed to use a specific region and to obtain temporary credentials from the instance metadata service (IMDS), whilst the CLI is left to work things out from its own defaults or a profile config.
To find out what's actually happening, re-run both with the applicable debug flags, viz:
aws --debug s3 cp hello.txt s3://bucketname/hello.txt
and
credentials = Aws::InstanceProfileCredentials.new(http_debug_output: $stdout)
client = Aws::S3::Client.new(region: 'us-east-1', credentials: credentials, http_wire_trace: true)
client.put_object(key: 'hello.txt', body: 'Hello World!', bucket: 'bucketname', content_type: 'text/plain')
These will generate heaps of output, but it's all relevant and, crucially, comparable once you look past the noise. The first thing to verify is that the CLI is definitely talking to IMDS (it will have requests to http://169.254.169.254 that culminate in something like "found credentials from IAM Role"). If not, then the instance isn't configured the way you thought, and there will be clues in the log explaining how it is getting credentials, e.g. an unexpected profile file or environment variables. You'll also want to check that they are obtaining the same role.
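One quick way to make that check, sketched here on the assumption that the aws-sdk metagem from the question (which bundles STS) is installed, is to ask STS which principal the SDK ends up using and compare it with the CLI's answer:
require "aws-sdk"

# Resolve credentials exactly the way the failing client does, then ask STS
# which principal those credentials belong to.
credentials = Aws::InstanceProfileCredentials.new
sts = Aws::STS::Client.new(region: "us-east-1", credentials: credentials)
identity = sts.get_caller_identity
puts identity.arn   # compare with the output of: aws sts get-caller-identity
If the ARN reported by the SDK doesn't match the one the CLI reports, you've found the divergence before even reaching the PUT.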
The second thing to compare is the subsequent PUT request each of them attempts. At this point in the debugging, almost everything else is equal, so it's very likely you can adjust the settings of the Ruby SDK client to match whatever the CLI is succeeding with.
The third possibility is a system firewall, or some kind of process-level mandatory access control, user permissions, cgroups/containers, etc. However, debugging your OS kernel and configuration would be a deep, dark rabbit hole, and in any case you've said this is "an EC2 instance", so it is, presumably, a plain old EC2 instance. If in fact the Ruby commands above are running under a different user ID, or inside a container, then maybe there's your answer already: it could well be a networking issue caused by user/container/security controls or similar OS-level configuration that needs fixing up.
Obligatory warning: if you choose to post any of the log data, be careful to redact any credentials! I don't believe these debug traces are particularly replayable, but you don't want to find out the hard way that I'm wrong.
The access denied error may be caused by the "very aggressive" default timeout in Aws::InstanceProfileCredentials.
Try initializing it with a longer timeout or additional retries:
credentials = Aws::InstanceProfileCredentials.new({
  retries: 2,             # Integer, default: 1
  http_open_timeout: 2.5, # Float, default: 1
  http_read_timeout: 2.5  # Float, default: 1
})
The docs do not make clear whether the timeout options are given in seconds or some other unit. 2.5 seemed conservative, given the defaults; further tweaking may be needed.
The AWS docs for the v3 Ruby SDK discuss the aggressive timeout in the Aws::S3::Client documentation, and the Aws::InstanceProfileCredentials docs show the options available for configuring it.

Parsing error Terraform

So I tried to spin up an EC2 instance using Terraform on my Mac (which is running Sierra and Terraform 0.11.5), but I keep getting a few errors:
Command: terraform plan
Error: Error parsing /Users/*****/terraform/aws.tf: At 1:11: illegal char
Command: terraform show
Error: Failed to load backend: Error loading backend config: Error parsing /Users/******/terraform/aws.tf: At 1:11: illegal char
Here is what my file looks like:
provider "aws" {
region = "us-east-1"
access_key = ""
secret_key = "********"
}
resource "aws_key_pair" "nick-key" {
key_name = "nick-key"
public_key = "ssh-rsa ********************************************"
}
resource "aws_instance" "web" {
ami = "ami-1853ac65"
instance_type = "t2.micro"
key_name = "${aws_key_pair.nick-key.key_name}"
I put * in place of the real information used in the file, in case anyone was wondering. Any help would be greatly appreciated! Thank you in advance!
To answer the question, and also to give some feedback on how to ensure your formatting is correct:
As mentioned in the comments, the example is missing a closing curly brace:
resource "aws_instance" "web" {
ami = "ami-1853ac65"
instance_type = "t2.micro"
key_name = "${aws_key_pair.nick-key.key_name}"
}
Terraform has a validate command that will check for these formatting issues. If you run it on the example above, you will see:
$ terraform validate
Error: Error parsing test.tf: object expected closing RBRACE got: EOF
Ensure you are calling the correct version of Terraform from the terminal.
I had a parsing error like this when using Terraform 0.11 to run scripts written for Terraform 0.12.
This mistake is easy to make if you have two versions of Terraform installed.
Make sure you have set up each alias in your bash profile (or appropriate shell profile file) and are using the correct command.
I tend to have the following set up in my working environment:
alias terraform='/usr/local/bin/terraform'      # points to the Terraform 0.12 installation
alias terraform11='/usr/local/bin/terraform11'  # points to the Terraform 0.11 installation

Terraform aws getting started issues

I'm running the latest version of Windows and I'm trying to use Terraform with AWS for the first time. I've created a free account and everything is ready to go.
Here is my test.tf:
provider "aws" {
access_key = "XXXXXXXXXXXXXXXXX" // don't worry i change this
secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXX" // this too
region = "eu-west-1" #Irlande
}
resource "aws_instance" "bastion" {
ami = "ami-0d063c6b"
instance_type = "t2.micro"
}
And when I run terraform plan on this, nothing happens:
Any solution to this issue?
Thanks in advance.
I guess you are running the latest Terraform.
Did you run terraform init first? If you use aws as the provider, you should also be fine using s3 as the backend.
Take a look at the Terraform init usage documentation.

Chef aws driver tags don't work using Etc.getlogin

I am currently using Chef solo on a Windows machine. I used the fog driver before, which created tags for my instances on AWS. Recently I moved to the aws driver and noticed that it does not handle tagging, so I tried writing my own code to create the tags. One of those tags is "Owner", which tells me who created the instance. For this, I am using the following code:
def get_admin_machine_options()
  case get_provisioner()
  when "cccis-environments-aws"
    general_machine_options = {
      ssh_username: "root",
      create_timeout: 7000,
      use_private_ip_for_ssh: true,
      aws_tags: { Owner: Etc.getlogin.to_s }
    }
    general_bootstrap_options = {
      key_name: KEY_NAME,
      image_id: "AMI",
      instance_type: "m3.large",
      subnet_id: "subnet",
      security_group_ids: ["sg-"],
    }
    bootstrap_options = Chef::Mixin::DeepMerge.hash_only_merge(general_bootstrap_options, {})
    return Chef::Mixin::DeepMerge.hash_only_merge(general_machine_options, { bootstrap_options: bootstrap_options })
  else
    raise "Unknown provisioner #{get_setting('CHEF_PROFILE')}"
  end
end

machine admin_name do
  recipe "random.rb"
  machine_options get_admin_machine_options()
  ohai_hints ohai_hints
  action $provisioningAction
end
Now, this works fine on my machine: the instance is created with the proper tags. But when I run the same code on someone else's machine, it doesn't create the tags at all. I find this very weird, since it is the same code. Does anyone know what's happening?
Okay, so I found the issue: I was using version 1.2.1 of the chef-provisioning-aws gem, while everyone else was on 1.1.1.
Version 1.1.1 of the gem does not support tagging, so it just went right past it.
I uninstalled the old gem and installed the new one. It worked like a charm!
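To keep every workstation on the same driver version, it can also help to pin the gem in the Gemfile (a sketch, assuming the workstation gems are managed with Bundler; 1.2.1 is simply the version mentioned above):
# Gemfile (sketch): pin the provisioning driver so every machine resolves the same version.
gem "chef-provisioning"
gem "chef-provisioning-aws", "1.2.1"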
