I have the following Ruby code:
sts = Aws::STS::Client.new
stsresp = sts.assume_role(
:role_arn => _role_arn,
:role_session_name => "provisioning_vpc_query"
)
ec2 = Aws::EC2::Client.new(
session_token: stsresp.credentials["session_token"],
region: _region,
access_key_id: stsresp.credentials["access_key_id"],
secret_access_key: stsresp.credentials["secret_access_key"]
)
# ...
p "got image: #{preimage_id}, create encrypted copy..."
resp = ec2.copy_image({
encrypted: true,
name: oname,
source_image_id: preimage_id,
source_region: _region,
dry_run: false
})
In the code above, preimage_id is a known image in the region _region referenced above.
When I run this, I get:
"got image: ami-71e9020b, create encrypted copy..."
Aws::EC2::Errors::InvalidRequest: The storage for the ami is not available in the source region.
I can do this manually from the console with no trouble.
Can you help me figure out what's wrong?
Turns out, I was attempting to copy the AMI before it was 'available'. Adding a single line to wait did the trick:
sts = Aws::STS::Client.new
stsresp = sts.assume_role(
:role_arn => _role_arn,
:role_session_name => "provisioning_vpc_query"
)
ec2 = Aws::EC2::Client.new(
session_token: stsresp.credentials["session_token"],
region: _region,
access_key_id: stsresp.credentials["access_key_id"],
secret_access_key: stsresp.credentials["secret_access_key"]
)
# ...
p "got image: #{preimage_id}, create encrypted copy..."
ec2.wait_until(:image_available, {image_ids: [preimage_id]})
resp = ec2.copy_image({
encrypted: true,
name: oname,
source_image_id: preimage_id,
source_region: _region,
dry_run: false
})
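If you go on to use the encrypted copy right away, the same race condition applies to the new AMI, so it can help to wait on it as well. A minimal sketch, assuming the ec2 client and resp from the code above (copy_image returns the new image ID as image_id):
copy_id = resp.image_id
ec2.wait_until(:image_available, {image_ids: [copy_id]})
p "encrypted copy available: #{copy_id}"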
I would like to know how to register a new user using the AWS Cognito Ruby SDK.
So far I have tried:
Input
AWS_KEY = "MY_AWS_KEY"
AWS_SECRET = "MY_AWS_SECRET"
client = Aws::CognitoIdentityProvider::Client.new(
access_key_id: AWS_KEY,
secret_access_key: AWS_SECRET,
region: 'us-east-1',
)
resp = client.sign_up({
client_id: "4d2c7274mc1bk4e9fr******", # required
username: "test#test.com", # required
password: "Password23sing", # required
user_attributes: [
{
name: "app", # required
value: "my app name",
},
],
validation_data: [
{
name: "username", # required
value: "true",
},
]
})
Output
Aws::CognitoIdentityProvider::Errors::NotAuthorizedException (Unable to verify secret hash for client 4d2c7274mc1bk4e9fr*****)
References
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/CognitoIdentityProvider/Client.html#sign_up-instance_method
If your app client is configured with a client secret, most of the client requests require you to include a 'secret hash' among the request parameters. The Cognito docs describe the secret hash thusly:
The SecretHash value is a Base 64-encoded keyed-hash message
authentication code (HMAC) calculated using the secret key of a user
pool client and username plus the client ID in the message. The following pseudocode shows how this value is calculated.
Base64 ( HMAC_SHA256 ( "Client Secret Key", "Username" + "Client Id" ) )
The docs also make it clear, via a block of sample Java, that you are expected to roll your own. After a bit of experimenting I was able to successfully complete a sign_up call with the following (my test pool was set up to require email and name attributes):
require 'base64'
require 'openssl'

def secret_hash(client_secret, username, client_id)
  Base64.strict_encode64(OpenSSL::HMAC.digest('sha256', client_secret, username + client_id))
end
client = Aws::CognitoIdentityProvider::Client.new(
access_key_id: AWS_KEY,
secret_access_key: AWS_SECRET,
region: REGION)
username = 'bob.scum#example.com'
resp = client.sign_up({
client_id: CLIENT_ID,
username: username,
password: 'Password23sing!',
secret_hash: secret_hash(CLIENT_SECRET, username, CLIENT_ID),
user_attributes: [{ name: 'email', value: username },
{ name: 'name', value: 'Bob' }],
validation_data: [{ name: 'username', value: 'true' },
{ name: 'email', value: 'true' }]
})
CLIENT_SECRET is the app client secret that can be found under General Settings > App Clients.
Result:
#<struct Aws::CognitoIdentityProvider::Types::SignUpResponse
user_confirmed=false,
code_delivery_details=nil,
user_sub="c87c2ac8-1480-4d15-a28d-6998d9260e73">
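Note that subsequent calls for the same user also need the secret hash when the app client has a secret. A minimal sketch of confirming the sign-up, assuming a confirmation code was delivered to the user (the code value here is hypothetical):
client.confirm_sign_up({
  client_id: CLIENT_ID,
  username: username,
  confirmation_code: '123456', # hypothetical code from the delivery email/SMS
  secret_hash: secret_hash(CLIENT_SECRET, username, CLIENT_ID)
})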
Using Terraform 0.7.7.
I have a simple Terraform file with the following:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "aws_instance" "personal" {
ami = "${lookup(var.amis, var.region)}"
instance_type = "t2.micro"
}
resource "aws_eip" "ip" {
instance = "${aws_instance.personal.id}"
}
resource "aws_key_pair" "personal" {
key_name = "mschuchard-us-east"
public_key = "${var.public_key}"
}
Terraform apply yields the following error:
aws_key_pair.personal: Creating...
fingerprint: "" => "<computed>"
key_name: "" => "mschuchard-us-east"
public_key: "" => "ssh-rsa pubkey hash mschuchard-us-east"
aws_instance.personal: Creating...
ami: "" => "ami-c481fad3"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_state: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "<computed>"
network_interface_id: "" => "<computed>"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "<computed>"
source_dest_check: "" => "true"
subnet_id: "" => "<computed>"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
aws_instance.personal: Creation complete
aws_eip.ip: Creating...
allocation_id: "" => "<computed>"
association_id: "" => "<computed>"
domain: "" => "<computed>"
instance: "" => "i-0ab94b58b0089697d"
network_interface: "" => "<computed>"
private_ip: "" => "<computed>"
public_ip: "" => "<computed>"
vpc: "" => "<computed>"
aws_eip.ip: Creation complete
Error applying plan:
1 error(s) occurred:
* aws_key_pair.personal: Error import KeyPair: InvalidKeyPair.Duplicate: The keypair 'mschuchard-us-east' already exists.
status code: 400, request id: 51950b9a-55e8-4901-bf35-4d2be234abbf
The only help I found from googling was to blow away the *.tfstate files, which I tried, and that did not help. I can launch an EC2 instance from the GUI with this key pair and easily SSH into it, but Terraform errors when trying to use the same fully functional key pair.
The error is telling you that the key pair already exists in your AWS account, but Terraform has no knowledge of it in its state files, so it attempts to create it each time.
You have two options available to you here. First, you could simply delete the key pair from the AWS account and allow Terraform to upload it, so that it is managed by Terraform and tracked in its state files.
Alternatively you could use the Terraform import command to import the pre-existing resource into your state file:
terraform import aws_key_pair.personal mschuchard-us-east
Use the ${uuid()} function to get a random ID for the key pair name each time one is generated. The generated UUID makes it into the state file, so you will still be able to destroy the key pair, but updating it in place will not be possible: every time you apply your Terraform configuration, a new key pair will be generated.
While it is true that you cannot generate a key pair from scratch using the AWS provider, you can create a new key pair object in AWS from an RSA private key that the TLS provider generates.
resource "aws_key_pair" "test" {
key_name = "${uuid()}"
public_key = "${tls_private_key.t.public_key_openssh}"
}
provider "tls" {}
resource "tls_private_key" "t" {
algorithm = "RSA"
}
provider "local" {}
resource "local_file" "key" {
content = "${tls_private_key.t.private_key_pem}"
filename = "id_rsa"
provisioner "local-exec" {
command = "chmod 600 id_rsa"
}
}
Use the tls provider to generate a key, and import it as a new object every time.
Then export the private key so that you have it for access to the server(s) later on.
It is worth noting that this breaks one of the paradigms Terraform is built around (infrastructure as code), but from a practical development standpoint that paradigm can be a bit too idealistic: Terraform builds fail midway through and states get invalidated. A better solution might be for the AWS provider to automatically import a resource when it receives an "already exists" error, or to offer that as an optional behavior.
The error says that the key pair already exists in AWS; it does not say whether it was created using Terraform or the console.
You should see it in the AWS console under EC2 -> Key Pairs for the correct region. You should delete it using the console before retrying to import it using Terraform.
Amazon's documentation provides examples in Java, .NET, and PHP of how to use DynamoDB Local. How do you do the same thing with the AWS Ruby SDK?
My guess is that you pass in some parameters during initialization, but I can't figure out what they are.
dynamo_db = AWS::DynamoDB.new(
:access_key_id => '...',
:secret_access_key => '...')
Are you using v1 or v2 of the SDK? You'll need to find that out; from the short snippet above, it looks like v1 (the v1 gem uses the AWS module, v2 uses Aws). I've included both answers, just in case.
v1 answer:
AWS.config(use_ssl: false, dynamo_db: { api_version: '2012-08-10', endpoint: 'localhost', port: '8080' })
dynamo_db = AWS::DynamoDB::Client.new
v2 answer:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(endpoint: 'http://localhost:8080')
Change the port number as needed of course.
As of aws-sdk version 2.7, the client raises Aws::Errors::MissingCredentialsError ("unable to sign request without credentials set") when the keys are absent, so the code below works for me:
dynamo_db = Aws::DynamoDB::Client.new(
region: "your-region",
access_key_id: "anykey-or-xxx",
secret_access_key: "anykey-or-xxx",
endpoint: "http://localhost:8080"
)
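A quick way to confirm the client is pointed at the local endpoint is to list tables (a hypothetical smoke test, not part of the original answer):
puts dynamo_db.list_tables.table_names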
I've written a simple gist that shows how to start, create, update and query a local dynamodb instance.
https://gist.github.com/SundeepK/4ffff773f92e3a430481
Here's a rundown of some simple code:
Below is a simple command to run DynamoDB Local in memory:
# Assuming you have downloaded DynamoDB Local and extracted it into a dir called dynamodbLocal
java -Djava.library.path=./dynamodbLocal/DynamoDBLocal_lib -jar ./dynamodbLocal/DynamoDBLocal.jar -inMemory -port 9010
Below is a simple Ruby script:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(region: "eu-west-1", endpoint: 'http://localhost:9010')
dynamo_db.create_table({
table_name: 'TestDB',
attribute_definitions: [{
attribute_name: 'SomeKey',
attribute_type: 'S'
},
{
attribute_name: 'epochMillis',
attribute_type: 'N'
}
],
key_schema: [{
attribute_name: 'SomeKey',
key_type: 'HASH'
},
{
attribute_name: 'epochMillis',
key_type: 'RANGE'
}
],
provisioned_throughput: {
read_capacity_units: 5,
write_capacity_units: 5
}
})
dynamo_db.put_item( table_name: "TestDB",
item: {
"SomeKey" => "somevalue1",
"epochMillis" => 1
})
puts dynamo_db.get_item({
table_name: "TestDB",
key: {
"SomeKey" => "somevalue",
"epochMillis" => 1
}}).item
The above will create a table with a hash and range key, then add and retrieve the same data that was added. Note that you must already have version 2 of the aws-sdk gem installed.
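Querying the local instance works the same way as against the real service; a minimal query sketch against the same table, assuming the item added above:
resp = dynamo_db.query({
  table_name: "TestDB",
  key_condition_expression: "SomeKey = :k AND epochMillis >= :t",
  expression_attribute_values: { ":k" => "somevalue1", ":t" => 0 }
})
puts resp.items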
I want to create an EC2 cloudformation stack which basically can be described in the following steps:
1.- Launch instance
2.- Provision the instance
3.- Stop the instance and create an AMI image out of it
4.- Create an autoscaling group with the created AMI image as source to launch new instances.
Basically I can do 1 and 2 in one cloudformation template and 4 in a second template. What I don't seem able to do is to create an AMI image from an instance inside a cloudformation template, which basically generates the problem of having to manually remove the AMI if I want to remove the stack.
That being said, my questions are:
1.- Is there a way to create an AMI image from an instance INSIDE the cloudformation template?
2.- If the answer to 1 is no, is there a way to add an AMI image (or any other resource for that matter) to make it part of a completed stack?
EDIT:
Just to clarify, I've already solved the problem of creating the AMI and using it in a cloudformation template, I just can't create the AMI INSIDE the cloudformation template or add it somehow to the created stack.
As I commented on Rico's answer, what I do now is use an ansible playbook which basically has 3 steps:
1.- Create a base instance with a cloudformation template
2.- Create, using ansible, an AMI of the instance created on step 1
3.- Create the rest of the stack (ELB, autoscaling groups, etc) with a second cloudformation template that updates the one created on step 1, and that uses the AMI created on step 2 to launch instances.
This is how I manage it now, but I wanted to know if there's any way to create an AMI INSIDE a cloudformation template or if it's possible to add the created AMI to the stack (something like telling the stack, "Hey, this belongs to you as well, so handle it").
Yes, you can create an AMI from an EC2 instance within a CloudFormation template by implementing a Custom Resource that calls the CreateImage API on create (and calls the DeregisterImage and DeleteSnapshot APIs on delete).
Since AMIs can sometimes take a long time to create, a Lambda-backed Custom Resource will need to re-invoke itself if the wait has not completed before the Lambda function times out.
Here's a complete example:
Description: Create an AMI from an EC2 instance.
Parameters:
ImageId:
Description: Image ID for base EC2 instance.
Type: AWS::EC2::Image::Id
# amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
Default: ami-9be6f38c
InstanceType:
Description: Instance type to launch EC2 instances.
Type: String
Default: m3.medium
AllowedValues: [ m3.medium, m3.large, m3.xlarge, m3.2xlarge ]
Resources:
# Completes when the instance is fully provisioned and ready for AMI creation.
AMICreate:
Type: AWS::CloudFormation::WaitCondition
CreationPolicy:
ResourceSignal:
Timeout: PT10M
Instance:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref ImageId
InstanceType: !Ref InstanceType
UserData:
"Fn::Base64": !Sub |
#!/bin/bash -x
yum -y install mysql # provisioning example
/opt/aws/bin/cfn-signal \
-e $? \
--stack ${AWS::StackName} \
--region ${AWS::Region} \
--resource AMICreate
shutdown -h now
AMI:
Type: Custom::AMI
DependsOn: AMICreate
Properties:
ServiceToken: !GetAtt AMIFunction.Arn
InstanceId: !Ref Instance
AMIFunction:
Type: AWS::Lambda::Function
Properties:
Handler: index.handler
Role: !GetAtt LambdaExecutionRole.Arn
Code:
ZipFile: !Sub |
var response = require('cfn-response');
var AWS = require('aws-sdk');
exports.handler = function(event, context) {
console.log("Request received:\n", JSON.stringify(event));
var physicalId = event.PhysicalResourceId;
function success(data) {
return response.send(event, context, response.SUCCESS, data, physicalId);
}
function failed(e) {
return response.send(event, context, response.FAILED, e, physicalId);
}
// Call ec2.waitFor, continuing if not finished before Lambda function timeout.
function wait(waiter) {
console.log("Waiting: ", JSON.stringify(waiter));
event.waiter = waiter;
event.PhysicalResourceId = physicalId;
var request = ec2.waitFor(waiter.state, waiter.params);
setTimeout(()=>{
request.abort();
console.log("Timeout reached, continuing function. Params:\n", JSON.stringify(event));
var lambda = new AWS.Lambda();
lambda.invoke({
FunctionName: context.invokedFunctionArn,
InvocationType: 'Event',
Payload: JSON.stringify(event)
}).promise().then((data)=>context.done()).catch((err)=>context.fail(err));
}, context.getRemainingTimeInMillis() - 5000);
return request.promise().catch((err)=>
(err.code == 'RequestAbortedError') ?
new Promise(()=>context.done()) :
Promise.reject(err)
);
}
var ec2 = new AWS.EC2(),
instanceId = event.ResourceProperties.InstanceId;
if (event.waiter) {
wait(event.waiter).then((data)=>success({})).catch((err)=>failed(err));
} else if (event.RequestType == 'Create' || event.RequestType == 'Update') {
if (!instanceId) { failed('InstanceID required'); }
ec2.waitFor('instanceStopped', {InstanceIds: [instanceId]}).promise()
.then((data)=>
ec2.createImage({
InstanceId: instanceId,
Name: event.RequestId
}).promise()
).then((data)=>
wait({
state: 'imageAvailable',
params: {ImageIds: [physicalId = data.ImageId]}
})
).then((data)=>success({})).catch((err)=>failed(err));
} else if (event.RequestType == 'Delete') {
if (physicalId.indexOf('ami-') !== 0) { return success({});}
ec2.describeImages({ImageIds: [physicalId]}).promise()
.then((data)=>
(data.Images.length == 0) ? success({}) :
ec2.deregisterImage({ImageId: physicalId}).promise()
).then((data)=>
ec2.describeSnapshots({Filters: [{
Name: 'description',
Values: ["*" + physicalId + "*"]
}]}).promise()
).then((data)=>
(data.Snapshots.length === 0) ? success({}) :
ec2.deleteSnapshot({SnapshotId: data.Snapshots[0].SnapshotId}).promise()
).then((data)=>success({})).catch((err)=>failed(err));
}
};
Runtime: nodejs4.3
Timeout: 300
LambdaExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal: {Service: [lambda.amazonaws.com]}
Action: ['sts:AssumeRole']
Path: /
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
- arn:aws:iam::aws:policy/service-role/AWSLambdaRole
Policies:
- PolicyName: EC2Policy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'ec2:DescribeInstances'
- 'ec2:DescribeImages'
- 'ec2:CreateImage'
- 'ec2:DeregisterImage'
- 'ec2:DescribeSnapshots'
- 'ec2:DeleteSnapshot'
Resource: ['*']
Outputs:
AMI:
Value: !Ref AMI
For what it's worth, here's a Python variant of wjordan's AMIFunction definition from the original answer. All other resources in the original YAML remain unchanged:
AMIFunction:
Type: AWS::Lambda::Function
Properties:
Handler: index.handler
Role: !GetAtt LambdaExecutionRole.Arn
Code:
ZipFile: !Sub |
import logging
import cfnresponse
import json
import boto3
from threading import Timer
from botocore.exceptions import WaiterError
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def handler(event, context):
ec2 = boto3.resource('ec2')
physicalId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else None
def success(data={}):
cfnresponse.send(event, context, cfnresponse.SUCCESS, data, physicalId)
def failed(e):
cfnresponse.send(event, context, cfnresponse.FAILED, str(e), physicalId)
logger.info('Request received: %s\n' % json.dumps(event))
try:
instanceId = event['ResourceProperties']['InstanceId']
if (not instanceId):
raise 'InstanceID required'
if not 'RequestType' in event:
success({'Data': 'Unhandled request type'})
return
if event['RequestType'] == 'Delete':
if (not physicalId.startswith('ami-')):
raise 'Unknown PhysicalId: %s' % physicalId
ec2client = boto3.client('ec2')
images = ec2client.describe_images(ImageIds=[physicalId])
for image in images['Images']:
ec2.Image(image['ImageId']).deregister()
snapshots = ([bdm['Ebs']['SnapshotId']
for bdm in image['BlockDeviceMappings']
if 'Ebs' in bdm and 'SnapshotId' in bdm['Ebs']])
for snapshot in snapshots:
ec2.Snapshot(snapshot).delete()
success({'Data': 'OK'})
elif event['RequestType'] in set(['Create', 'Update']):
if not physicalId: # AMI creation has not been requested yet
instance = ec2.Instance(instanceId)
instance.wait_until_stopped()
image = instance.create_image(Name="Automatic from CloudFormation stack ${AWS::StackName}")
physicalId = image.image_id
else:
logger.info('Continuing in awaiting image available: %s\n' % physicalId)
ec2client = boto3.client('ec2')
waiter = ec2client.get_waiter('image_available')
try:
waiter.wait(ImageIds=[physicalId], WaiterConfig={'Delay': 30, 'MaxAttempts': 6})
except WaiterError as e:
# Request the same event but set PhysicalResourceId so that the AMI is not created again
event['PhysicalResourceId'] = physicalId
logger.info('Timeout reached, continuing function: %s\n' % json.dumps(event))
lambda_client = boto3.client('lambda')
lambda_client.invoke(FunctionName=context.invoked_function_arn,
InvocationType='Event',
Payload=json.dumps(event))
return
success({'Data': 'OK'})
else:
success({'Data': 'OK'})
except Exception as e:
failed(e)
Runtime: python2.7
Timeout: 300
No.
I suppose yes. Once the stack is created, you can use the "Update Stack" operation. You need to provide the full JSON template of the initial stack plus your changes in that same file (the changed AMI). I would run this in a test environment first (not production), as I'm not really sure what the operation does to the existing instances.
Why not create an AMI initially outside CloudFormation and then use that AMI in your final CloudFormation template?
Another option is to write some automation to create two cloudformation stacks and you can delete the first one once the AMI that you've created is finalized.
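As an illustration of that last option (not from the original answer), a rough Ruby sketch of such automation might look like the following; the stack names, template files, and the 'Instance' logical resource ID are all assumptions:
require 'aws-sdk' # aws-sdk v2

cfn = Aws::CloudFormation::Client.new(region: 'us-east-1')
ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# 1. Create the base stack that launches and provisions the instance.
cfn.create_stack(stack_name: 'base-instance', template_body: File.read('base.yaml'))
cfn.wait_until(:stack_create_complete, stack_name: 'base-instance')

# 2. Find the instance in the stack and create an AMI from it.
instance_id = cfn.describe_stack_resources(stack_name: 'base-instance')
                 .stack_resources
                 .find { |r| r.logical_resource_id == 'Instance' }
                 .physical_resource_id
image_id = ec2.create_image(instance_id: instance_id, name: 'provisioned-ami').image_id
ec2.wait_until(:image_available, image_ids: [image_id])

# 3. Tear down the base stack and create the autoscaling stack from the AMI.
cfn.delete_stack(stack_name: 'base-instance')
cfn.create_stack(
  stack_name: 'asg-stack',
  template_body: File.read('asg.yaml'),
  parameters: [{ parameter_key: 'ImageId', parameter_value: image_id }]
)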
While #wjordan's solution is good for simple use cases, updating the User Data will not update the AMI.
(DISCLAIMER: I am the original author) cloudformation-ami aims to let you declare AMIs in CloudFormation that can be reliably created, updated and deleted. Using cloudformation-ami, you can declare custom AMIs like this:
MyAMI:
Type: Custom::AMI
Properties:
ServiceToken: !ImportValue AMILambdaFunctionArn
Image:
Name: my-image
Description: some description for the image
TemplateInstance:
ImageId: ami-467ca739
IamInstanceProfile:
Arn: arn:aws:iam::1234567890:instance-profile/MyProfile-ASDNSDLKJ
UserData:
Fn::Base64: !Sub |
#!/bin/bash -x
yum -y install mysql # provisioning example
# Signal that the instance is ready
INSTANCE_ID=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
aws ec2 create-tags --resources $INSTANCE_ID --tags Key=UserDataFinished,Value=true --region ${AWS::Region}
KeyName: my-key
InstanceType: t2.nano
SecurityGroupIds:
- sg-d7bf78b0
SubnetId: subnet-ba03aa91
BlockDeviceMappings:
- DeviceName: "/dev/xvda"
Ebs:
VolumeSize: '10'
VolumeType: gp2
My AWS profile looks like:
[my_aws_profile]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
region = us-east-1
My call:
s3 = Aws::S3::Client.new(region: 'us-east-1', credentials: Aws::SharedCredentials.new(profile_name: 'my_aws_profile'))
s3.put_object_acl(
bucket: $global_config['bucket'], key: someFile_key_json, grant_read: grant_read)
Returns
Aws::S3::Errors::MissingSecurityHeader: Your request was missing a required header
What am I doing wrong? ...
How do I fix this?