My AWS profile looks like:
[my_aws_profile]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
region = us-east-1
My call:
s3 = Aws::S3::Client.new(region: 'us-east-1', credentials: Aws::SharedCredentials.new(profile_name: 'my_aws_profile'))
s3.put_object_acl(
  bucket: $global_config['bucket'],
  key: someFile_key_json,
  grant_read: grant_read
)
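(For reference, grant_read is expected to be a grantee string in the x-amz-grant-read header format; a minimal sketch of what such a value can look like, with the AllUsers group URI used purely as an illustration, not taken from the question:)
# Hypothetical grantee string; grant_read must be a formatted grantee such as a group URI or a canonical user id.
grant_read = 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'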
This call returns:
Aws::S3::Errors::MissingSecurityHeader: Your request was missing a required header
What am I doing wrong? ...
How do I fix this?
I'm trying to run terraform-local to test out my modules before deployment. I've run into an error when trying to run my stack locally:
Error: Unsupported argument
on localstack_providers_override.tf line 67, in provider "aws":
67: meteringmarketplace = "http://localhost:4566"
An argument named "meteringmarketplace" is not expected here.
For context, my terraform templates specify the following resources
A Lambda function with a Node runtime
An API Gateway
CloudWatch log groups, IAM roles, S3 objects and some other minor resources
I'm also running terraform v1.2.7 and terraform-local v1.2.7
Any idea how I might fix this error?
I get exactly the same error. I assume the terraform-local configuration is setting that "meteringmarketplace" endpoint, which does not exist anymore (I think it was renamed?).
One possibility is to write the local configuration yourself: instead of terraform-local, use plain terraform with your own overrides and run it against LocalStack (https://github.com/localstack/localstack).
As an example, I used the code from the Terraform page:
main.tf:
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
resource "aws_s3_bucket" "test-bucket" {
bucket = "my-bucket"
}
If you have LocalStack running with the default settings, you should be able to run "terraform plan" against it.
Maybe that helps you as a workaround.
I'm trying to generate a presigned url for an s3 bucket using Ruby.
client = Aws::S3::Client.new(
  region: 'eu-west-1', # or any other region
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
  :put_object,
  bucket: ENV['S3_PROFILES_BUCKET'],
  key: "test-#{SecureRandom.uuid}"
)
I then take the URL that is returned from this, something like:
"https://some-bucket.s3.eu-west-1.amazonaws.com/test-4ad40444-e907-4748-a025-a12515580450?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATTSSBDQFDFFX36UU4%2F20191204%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20191204T002242Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=31b0a90127f43e79462713b101b5fc80146c50f800cfce31c493d206ea142333"
When I try to make a POST (or PUT) request to this URL with an image binary (I'm using Postman), I get an error about the signature not being correct.
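(For comparison, a minimal sketch of uploading through the presigned URL from Ruby with Net::HTTP rather than Postman; url and 'image.jpg' are placeholders, with url being whatever signer.presigned_url returned. Note that a URL presigned for :put_object has to be consumed with a PUT request.)
require 'net/http'
require 'uri'

# url is the presigned URL returned by signer.presigned_url (placeholder).
uri = URI.parse(url)
request = Net::HTTP::Put.new(uri)
request.body = File.binread('image.jpg') # placeholder file name

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code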
I am trying to create an EC2 instance using Terraform. Passing credentials through the Terraform CLI fails, while hardcoding them in main.tf works fine.
This is to create an EC2 instance dynamically using Terraform.
terraform apply works with the following main.tf:
provider "aws" {
region = "us-west-2"
access_key = "hard-coded-access-key"
secret_key = "hard-coded-secret-key"
}
resource "aws_instance" "ec2-instance" {
ami = "ami-id"
instance_type = "t2.micro"
tags {
Name = "test-inst"
}
}
while the following does not work:
terraform apply -var access_key="hard-coded-access-key" -var secret_key="hard-coded-secret-key"
Is there any difference between the above two ways of running the command? As per the Terraform documentation, both of the above should work.
Every Terraform module can use input variables, including the root module. But before using input variables, you must declare them.
Create a variables.tf file in the same folder as your main.tf file:
variable "credentials" {
type = object({
access_key = string
secret_key = string
})
description = "My AWS credentials"
}
Then you can reference input variables in your code like this:
provider "aws" {
region = "us-west-2"
access_key = var.credentials.access_key
secret_key = var.credentials.secret_key
}
And you can either run:
terraform apply -var 'credentials={access_key="hard-coded-access-key", secret_key="hard-coded-secret-key"}'
Or you could create a terraform.tfvars file with the following content:
# ------------------
# AWS Credentials
# ------------------
credentials = {
  access_key = "hard-coded-access-key"
  secret_key = "hard-coded-secret-key"
}
And then simply run terraform apply.
But the key point is that you must declare input variables before using them.
The @Felipe answer is right, but I would never recommend defining the access key and secret key in variables.tf. What you should do is leave them blank and set the keys using aws configure. Another option is to create keys for Terraform deployment purposes only, using aws configure --profile terraform, or without a profile using plain aws configure.
So your connection.tf or main.tf will look like this:
provider "aws" {
#You can use an AWS credentials file to specify your credentials.
#The default location is $HOME/.aws/credentials on Linux and OS X, or "%USERPROFILE%\.aws\credentials" for Windows users
region = "us-west-2"
# profile configured during aws configure --profile
profile = "terraform"
# you can also restrict account here, to allow particular account for deployment
allowed_account_ids = ["12*****45"]
}
You can also keep the secret key and access key in a separate file. The reason is that variables.tf is part of your configuration and is usually committed to version control (e.g. GitHub or Bitbucket), so it is better not to place these sensitive keys in variables.tf.
You can create a file somewhere on your system and reference its path in the provider section.
provider "aws" {
region = "us-west-2"
shared_credentials_file = "$HOME/secret/credentials"
}
Here is the format of the credentials file
[default]
aws_access_key_id = A*******Z
aws_secret_access_key = A*******/***xyz
So I am trying to write a simple script to connect to AWS S3 and create a bucket, but I keep getting Access Denied (Aws::S3::Errors::AccessDenied).
This is my code:
require 'aws-sdk'
require 'csv'

def test()
  # creds[1] is the first data row of accessKeys.csv (row 0 is presumably the CSV header):
  # [access_key_id, secret_access_key]
  creds = CSV.read('accessKeys.csv')
  s3_client = Aws::S3::Client.new(
    region: 'us-west-2',
    credentials: Aws::Credentials.new(creds[1][0], creds[1][1]),
  )
  s3 = Aws::S3::Resource.new(client: s3_client)
  s3.create_bucket({
    bucket: "dns-complaint-bucket",
  })
end

test()
I have also attached the AmazonS3FullAccess policy to the IAM user that I am using.
I have the following Ruby code:
sts = Aws::STS::Client.new
stsresp = sts.assume_role(
  :role_arn => _role_arn,
  :role_session_name => "provisioning_vpc_query"
)
ec2 = Aws::EC2::Client.new(
  session_token: stsresp.credentials["session_token"],
  region: _region,
  access_key_id: stsresp.credentials["access_key_id"],
  secret_access_key: stsresp.credentials["secret_access_key"]
)
# ...
p "got image: #{preimage_id}, create encrypted copy..."
resp = ec2.copy_image({
  encrypted: true,
  name: oname,
  source_image_id: preimage_id,
  source_region: _region,
  dry_run: false
})
In the code above, preimage_id is a known image in the region _region referenced above.
When I run this, I get:
"got image: ami-71e9020b, create encrypted copy..."
Aws::EC2::Errors::InvalidRequest: The storage for the ami is not available in the source region.
I can do this manually from the console with no trouble.
Can you help me figure out what's wrong?
Turns out I was attempting to copy the AMI before it was 'available'. Adding a single line to wait did the trick:
sts = Aws::STS::Client.new
stsresp = sts.assume_role(
  :role_arn => _role_arn,
  :role_session_name => "provisioning_vpc_query"
)
ec2 = Aws::EC2::Client.new(
  session_token: stsresp.credentials["session_token"],
  region: _region,
  access_key_id: stsresp.credentials["access_key_id"],
  secret_access_key: stsresp.credentials["secret_access_key"]
)
# ...
p "got image: #{preimage_id}, create encrypted copy..."
ec2.wait_until(:image_available, {image_ids: [preimage_id]})
resp = ec2.copy_image({
  encrypted: true,
  name: oname,
  source_image_id: preimage_id,
  source_region: _region,
  dry_run: false
})
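(For reference, the :image_available waiter polls DescribeImages until the image state becomes available. If the default polling interval or attempt count ever needs adjusting, the block form of wait_until can be used; the values below are only illustrative.)
# Tune the waiter's polling (illustrative values; the defaults are usually fine).
ec2.wait_until(:image_available, image_ids: [preimage_id]) do |waiter|
  waiter.max_attempts = 40 # number of polls before giving up
  waiter.delay = 15        # seconds between polls
end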