How to Access Athena QueryString From CloudFormation in Lambda?

AWS-loaded question, but does anyone know the proper way to access an Athena query string (defined in CloudFormation) from a Lambda function?
I have set up both the Athena NamedQuery and the Lambda in CloudFormation. Abstracting out some of the more project-specific details, the general form I have is:
MyQuery:
  Type: AWS::Athena::NamedQuery
  Properties:
    Database: "mydatabase"
    Name: "DataQuery"
    QueryString: SELECT * FROM mydatabase
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    Handler: 'handlers.migration_handler'
    Runtime: python3.6
    CodeUri:
      Bucket: BATS::SAM::CodeS3Bucket
      Key: BATS::SAM::CodeS3Key
    Description: Migrates data from output of Athena query to S3
    Policies:
      - AmazonS3FullAccess
      - AWSLambdaExecute
      - AmazonAthenaFullAccess
    Environment:
      Variables:
        MY_QUERY:
          Ref: MyQuery
When I'm writing the handler for the lambda, I want to call:
athena_client = boto3.client('athena')
response = athena_client.start_query_execution(
    QueryString=os.environ['MY_QUERY'],
    ResultConfiguration={'OutputLocation': 's3://my-bucket'}
)
However, QueryString needs to be the query text itself, so this currently isn't working. I want to access the QueryString property of the query referenced by MY_QUERY, and I feel like I'm so close, but I'm not quite sure how to get that last step. Any help here would be greatly appreciated.

Figured it out yesterday (or more specifically, my teammate figured it out). Ref on an AWS::Athena::NamedQuery returns the query's NamedQueryId, and Boto3 happens to have a method get_named_query(NamedQueryId) that returns a dictionary of the form:
{
    'NamedQuery': {
        'Name': 'string',
        'Description': 'string',
        'Database': 'string',
        'QueryString': 'string',
        'NamedQueryId': 'string'
    }
}
Thus, my code worked when I modified my lambda handler to:
athena_client = boto3.client('athena')
query_info = athena_client.get_named_query(
    NamedQueryId=os.environ['MY_QUERY']
)
response = athena_client.start_query_execution(
    QueryString=query_info['NamedQuery']['QueryString'],
    ResultConfiguration={'OutputLocation': 's3://my-bucket'}
)

Related

How to connect to remote oracle database using typeorm in nestjs?

I was wondering how to connect to a remote Oracle database from NestJS using TypeORM.
I installed the TypeORM and Oracle packages using the following commands:
npm i --save @nestjs/typeorm typeorm oracle
npm install oracledb --save
Then I tried configuring the connection in app.module.ts using TypeOrmModule.forRoot, but it was not successful.
Here are my configuration settings.
TypeOrmModule.forRoot({
  type: 'oracle',
  host: 'ip of hostname',
  port: port number,
  username: 'username',
  password: 'password',
  serviceName: 'servicename',
  synchronize: false,
  entities: []
})
Can anybody help me figure out what I am missing? I would also like to know how to execute a query once this connection succeeds. An example would be helpful.
Got it.
One missing thing was the database name.
Added
database: 'databasename' to the above configuration and it worked.
But my question remains: how do I use this connection in a service to fetch/push data from/to the Oracle database?
If you provide a name in your connection details you should be able to refer to the database connection using that; otherwise, if no name is provided, I believe it is assigned the name 'default'.
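For example, to grab a named connection imperatively, here is a minimal sketch; 'oracleConn' is a hypothetical name, and this assumes TypeORM's 0.2-style API that matches the forRoot usage above:
import { getConnection } from 'typeorm';

// Sketch only: 'oracleConn' is whatever name you set in TypeOrmModule.forRoot({ name: 'oracleConn', ... }).
async function pingOracle(): Promise<void> {
    // Omit the argument to get the connection named 'default'.
    const connection = getConnection('oracleConn');
    // Raw SQL escape hatch; Oracle's dummy table DUAL is handy for a connectivity check.
    const rows = await connection.query('SELECT 1 FROM DUAL');
    console.log(rows);
}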
Basically these are the steps you should perform to use the database connection: (examples below each)
Create a model - this is how TypeORM knows to create a table.
export class Photo {
  id: number
  name: string
  description: string
  filename: string
  views: number
  isPublished: boolean
}
Create an Entity - this should match your model, with the appropriate decorators. At minimum you should have the @Entity() decorator before your class definition and @Column() before each field.
import { Entity, Column } from "typeorm"

@Entity()
export class Photo {
  @Column()
  id: number
  @Column()
  name: string
  @Column()
  description: string
  @Column()
  filename: string
  @Column()
  views: number
  @Column()
  isPublished: boolean
}
Create your data source - it looks like you have already done this, but I would give it a name field, and you will need to pass your entities into the entities array.
import { DataSource } from "typeorm"

const AppDataSource = new DataSource({
    type: "postgres",
    name: "photos",
    host: "localhost",
    port: 5432,
    username: "root",
    password: "admin",
    database: "test",
    entities: [Photo],
    synchronize: true,
    logging: false,
})
Then you can use repositories to manage data in the database:
const photo = new Photo()
photo.name = "Me and Bears"
photo.description = "I am near polar bears"
photo.filename = "photo-with-bears.jpg"
photo.views = 1
photo.isPublished = true
const photoRepository = AppDataSource.getRepository(Photo)
await photoRepository.save(photo)
console.log("Photo has been saved")
const savedPhotos = await photoRepository.find()
console.log("All photos from the db: ", savedPhotos)
For more details, I would spend some time reading through the TypeORM website; all the examples I pulled are from there:
https://typeorm.io/

SAM Template - define HttpApi with Lambda Authorizer and Simple Response

Description of the problem
I have created a Lambda function with API Gateway in SAM, then deployed it, and it was working as expected. In API Gateway I used HttpApi, not REST API.
Then, I wanted to add a Lambda authorizer with Simple Response. So, I followed the SAM and API Gateway docs and I came up with the code below.
When I call the route items-list it now returns 401 Unauthorized, which is expected.
However, when I add the header myappauth with the value "test-token-abc", I get a 500 Internal Server Error.
I checked this page, but it seems all of the steps listed there are OK: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-http-lambda-integrations/
I enabled logging for the API Gateway, following these instructions: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-logging.html
But all I get is something like this (redacted my IP and request ID):
[MY-IP] - - [07/Jul/2021:08:24:06 +0000] "GET GET /items-list/{userNumber} HTTP/1.1" 500 35 [REQUEST-ID]
(Perhaps I can configure the logger in such a way that it prints a more meaningful error message? EDIT: I've tried adding $context.authorizer.error to the logs, but it doesn't print any specific error message, just prints a dash: -)
I also checked the logs for the Lambda functions; there is nothing there (all logs were from the time before I added the authorizer).
So, what am I doing wrong?
What I tried:
This is my Lambda authorizer function, which I have deployed using sam deploy; when I test it in isolation using an event with the myappauth header, it works:
exports.authorizer = async (event) => {
    let response = {
        "isAuthorized": false,
    };
    if (event.headers.myappauth === "test-token-abc") {
        response = {
            "isAuthorized": true,
        };
    }
    return response;
};
and this is the SAM template.yml which I deployed using sam deploy:
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  myapp-v1
Transform:
  - AWS::Serverless-2016-10-31
Globals:
  Function:
    Runtime: nodejs14.x
    MemorySize: 128
    Timeout: 100
    Environment:
      Variables:
        MYAPP_TOKEN: "test-token-abc"
Resources:
  MyAppAPi:
    Type: AWS::Serverless::HttpApi
    Properties:
      FailOnWarnings: true
      Auth:
        Authorizers:
          MyAppLambdaAuthorizer:
            AuthorizerPayloadFormatVersion: "2.0"
            EnableSimpleResponses: true
            FunctionArn: !GetAtt authorizerFunction.Arn
            FunctionInvokeRole: !GetAtt authorizerFunctionRole.Arn
            Identity:
              Headers:
                - myappauth
        DefaultAuthorizer: MyAppLambdaAuthorizer
  itemsListFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/v1-handlers.itemsList
      Description: A Lambda function that returns a list of items.
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        Api:
          Type: HttpApi
          Properties:
            Path: /items-list/{userNumber}
            Method: get
            ApiId: MyAppAPi
  authorizerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/v1-handlers.authorizer
      Description: A Lambda function that authorizes requests.
      Policies:
        - AWSLambdaBasicExecutionRole
Edit:
User @petey suggested that I try returning an IAM policy from my authorizer function, so I changed EnableSimpleResponses to false in the template.yml and changed my function as below, but got the same result:
exports.authorizer = async (event) => {
    let response = {
        "principalId": "my-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Deny",
                "Resource": event.routeArn
            }]
        }
    };
    if (event.headers.myappauth == "test-token-abc") {
        response = {
            "principalId": "my-user",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": event.routeArn
                }]
            }
        };
    }
    return response;
};
I am going to answer my own question because I have resolved the issue, and I hope this will help people who are going to use the new "HTTP API" format in API Gateway, since there are not many tutorials out there yet. Most examples you will find online are for the older API Gateway standard, which Amazon calls "REST API". (If you want to know the difference between the two, see here.)
The main problem lies in the example that is presented in the official documentation. They have:
MyLambdaRequestAuthorizer:
  FunctionArn: !GetAtt MyAuthFunction.Arn
  FunctionInvokeRole: !GetAtt MyAuthFunctionRole.Arn
The problem with this is that the template will create a new role called MyAuthFunctionRole, but that role will not have all the necessary policies attached to it!
The crucial part that I missed in the official docs is this paragraph:
You must grant API Gateway permission to invoke the Lambda function by using either the function's resource policy or an IAM role. For this example, we update the resource policy for the function so that it grants API Gateway permission to invoke our Lambda function.
The following command grants API Gateway permission to invoke your Lambda function. If API Gateway doesn't have permission to invoke your function, clients receive a 500 Internal Server Error.
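The command the docs refer to is Lambda's add-permission call. As a rough equivalent in code, here is a sketch using the JavaScript SDK; the function name and source ARN are placeholders you would replace with your own:
import * as AWS from 'aws-sdk';

const lambda = new AWS.Lambda();

// Sketch: grant API Gateway permission to invoke the authorizer function.
async function allowApiGatewayInvoke(): Promise<void> {
    await lambda.addPermission({
        FunctionName: 'my-authorizer-function',       // placeholder
        StatementId: 'apigateway-invoke-authorizer',  // any unique statement id
        Action: 'lambda:InvokeFunction',
        Principal: 'apigateway.amazonaws.com',
        SourceArn: 'arn:aws:execute-api:REGION:ACCOUNT_ID:API_ID/authorizers/*', // placeholder
    }).promise();
}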
The best way to solve this is to actually include the Role definition in the SAM template.yml, under Resources:
MyAuthFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    # [... other properties...]
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Policies:
      # here you will put the InvokeFunction policy, for example:
      - PolicyName: MyPolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: 'lambda:InvokeFunction'
              Resource: !GetAtt MyAuthFunction.Arn
You can see a description of the various properties for a role here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html
Another way to solve this is to separately create a new policy in the AWS Console that has the InvokeFunction permission, and then, after deployment, attach that policy to the MyAuthFunctionRole that SAM created. The authorizer will then work as expected.
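If you would rather script that attachment than click through the console, here is a small sketch; the role and policy names are placeholders:
import * as AWS from 'aws-sdk';

const iam = new AWS.IAM();

// Sketch: attach an existing managed policy (one that allows lambda:InvokeFunction)
// to the role that SAM created for the authorizer.
async function attachInvokePolicy(): Promise<void> {
    await iam.attachRolePolicy({
        RoleName: 'MyAuthFunctionRole',                                      // placeholder
        PolicyArn: 'arn:aws:iam::ACCOUNT_ID:policy/MyInvokeFunctionPolicy',  // placeholder
    }).promise();
}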
Another strategy would be to create a new role beforehand that has a policy with the InvokeFunction permission, then copy and paste the ARN of that role into the SAM template.yml:
MyLambdaRequestAuthorizer:
  FunctionArn: !GetAtt MyAuthFunction.Arn
  FunctionInvokeRole: arn:aws:iam::[...]
Your lambda authorizer is not returning what is expected of an actual lambda authorizer (an IAM policy). This could explain that internal error 500.
To fix, replace it with something like this, which returns an IAM policy (or rejects):
// A simple token-based authorizer example to demonstrate how to use an authorization token
// to allow or deny a request. In this example, the caller named 'user' is allowed to invoke
// a request if the client-supplied token value is 'allow'. The caller is not allowed to invoke
// the request if the token value is 'deny'. If the token value is 'unauthorized' or an empty
// string, the authorizer function returns an HTTP 401 status code. For any other token value,
// the authorizer returns an HTTP 500 status code.
// Note that token values are case-sensitive.
exports.handler = function(event, context, callback) {
    var token = event.authorizationToken;
    // modify switch statement here to your needs
    switch (token) {
        case 'allow':
            callback(null, generatePolicy('user', 'Allow', event.methodArn));
            break;
        case 'deny':
            callback(null, generatePolicy('user', 'Deny', event.methodArn));
            break;
        case 'unauthorized':
            callback("Unauthorized");   // Return a 401 Unauthorized response
            break;
        default:
            callback("Error: Invalid token"); // Return a 500 Invalid token response
    }
};

// Helper function to generate an IAM policy
var generatePolicy = function(principalId, effect, resource) {
    var authResponse = {};
    authResponse.principalId = principalId;
    if (effect && resource) {
        var policyDocument = {};
        policyDocument.Version = '2012-10-17';
        policyDocument.Statement = [];
        var statementOne = {};
        statementOne.Action = 'execute-api:Invoke';
        statementOne.Effect = effect;
        statementOne.Resource = resource;
        policyDocument.Statement[0] = statementOne;
        authResponse.policyDocument = policyDocument;
    }
    // Optional output with custom properties of the String, Number or Boolean type.
    authResponse.context = {
        "stringKey": "stringval",
        "numberKey": 123,
        "booleanKey": true
    };
    return authResponse;
}
Lots more information here: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html#api-gateway-lambda-authorizer-lambda-function-create
Just to complete the answer: you have to add an AssumeRolePolicyDocument under Properties.
The role will then read:
MyAuthFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Policies:
      # see answer above

Passing a path parameter to Google's Endpoint for Cloud Function

I am following Google's tutorial on setting up a Google Cloud Endpoint (not AWS API Gateway) in front of my Cloud Function. The Cloud Function triggers an AWS Lambda function, AND I am trying to pass a path parameter from my Endpoint, as defined by the OpenAPI spec.
Path parameters are variable parts of a URL path. They are typically used to point to a specific resource within a collection, such as a user identified by ID. A URL can have several path parameters, each denoted with curly braces { }.
paths:
  /users/{id}:
    get:
      parameters:
        - in: path
          name: id # Note the name is the same as in the path
          required: true
          schema:
            type: integer

GET /users/{id}
My openapi.yaml
swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
  version: 1.0.0
host: HOST
x-google-endpoints:
  - name: "HOST"
    allowCors: "true"
schemes:
  - https
produces:
  - application/json
paths:
  /function1/{pathParameters}:
    get:
      operationId: function1
      parameters:
        - in: path
          name: pathParameters
          required: true
          type: string
      x-google-backend:
        address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1
      responses:
        '200':
          description: A successful response
          schema:
            type: string
The error I get when I use the Endpoint URL https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1/conversations is a TypeError from my AWS Lambda function:
StatusCode:200, FunctionError: "Unhandled", ExecutedVersion: "$LATEST". Payload: "errorType":"TypeError", errorMessage:"Cannot read property 'startsWith' of undefined..."
It is saying that on the line
var path = event.pathParameters;
...
...
if (path.startsWith('conversations/')) {...};
my path variable is undefined.
I initially thought my Google Function was not correctly passing pathParameters, but when I tested it using the triggering event {"pathParameters":"conversations"}, my Lambda returned the payload successfully.
My Google Cloud Function:
let AWS = require('aws-sdk');
AWS.config.update({
    accessKeyId: 'key',
    secretAccessKey: 'secret',
    region: 'region'
})
let lambda = new AWS.Lambda();
exports.helloWorld = async (req, res) => {
    let params = {
        FunctionName: 'lambdafunction',
        InvocationType: "RequestResponse",
        Payload: JSON.stringify(req.body)
    };
    res.status(200).send(await lambda.invoke(params, function(err, data) {
        if (err) { throw err; }
        else {
            return data.Payload;
        }
    }).promise());
}
EDIT 1:
After seeing this Google Group post, I tried adding path_translation: APPEND_PATH_TO_ADDRESS to my openapi.yaml file, yet I'm still having no luck.
...
paths:
  /{pathParameters}:
    get:
      ...
      x-google-backend:
        address: https://tomy.cloudfunctions.net/function-Name
        path_translation: APPEND_PATH_TO_ADDRESS
@Arunmainthan Kamalanathan mentioned in the comments that testing in AWS and Google Cloud directly with the trigger event {"pathParameters":"conversations"} is not equivalent to passing req.body from my Google function to the AWS Lambda. I think this is where my error is occurring: I'm not correctly passing my path parameter in the payload.
EDIT 2:
There is this Stack Overflow post concerning passing route parameters to Cloud Functions using req.path. When I console.log(req.path) I get / and when I console.log(req.params) I get {'0': ''}, so for some reason my path parameter is not being passed correctly from the Cloud Endpoint URL to my Google function.
I was running into the same issue when specifying multiple paths/routes within my openapi.yaml. It all depends on where you place x-google-backend (top level versus operation level); this has implications for the behaviour of path_translation. You can also override path_translation yourself, as the documentation clearly describes:
path_translation: [ APPEND_PATH_TO_ADDRESS | CONSTANT_ADDRESS ]
Optional. Sets the path translation strategy used by ESP when making
target backend requests.
Note: When x-google-backend is used at the top level of the OpenAPI
specification, path_translation defaults to APPEND_PATH_TO_ADDRESS,
and when x-google-backend is used at the operation level of the
OpenAPI specification, path_translation defaults to CONSTANT_ADDRESS.
For more details on path translation, please see the Understanding
path translation section.
This means that if you want the path to be appended as a path parameter instead of a query parameter in your backend, you should adhere to the following scenarios:
Scenario 1: Do you have one cloud function in the x-google-backend.address that handles all of your paths in the OpenAPI specification? Put x-google-backend at the top level.
Scenario 2: Do you have multiple cloud functions corresponding to different paths? Put x-google-backend at the operation level and set x-google-backend.path_translation to APPEND_PATH_TO_ADDRESS.
When your invocation type is RequestResponse, you can access the payload directly from the event parameter of the Lambda.
Look at the Payload parameter of the Google function:
let params = {
    FunctionName: 'lambdafunction',
    InvocationType: "RequestResponse",
    Payload: JSON.stringify({ name: 'Arun' })
};
res.status(200).send(await lambda.invoke(params)...)
Also the Lambda part:
exports.handler = function(event, context) {
    context.succeed('Hello ' + event.name);
};
I hope this helps.

CloudFormation Transform::Include parameters

I want to use the AWS macro Transform::Include with some dynamic parameters for my included file.
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      TestMacroVariable:
        Default: 2
        Type: Number
      Location: !Sub "s3://${InstallBucketName}/test.yaml"
test.yaml:
DataAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName:
      Ref: DataLaunchConfiguration
    MinSize: '1'
    MaxSize: '100'
    DesiredCapacity:
      Ref: TestMacroVariable
...
After calling: aws cloudformation describe-stack-events --stack-name $stack
I get:
"ResourceStatusReason": "The value of parameter TestMacroVariable
under transform Include must resolve to a string, number, boolean or a
list of any of these.. Rollback requested by user."
When I try to do it this way:
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      TestMacroVariable: 2
      Location: !Sub "s3://${InstallBucketName}/test.yaml"
I get:
"ResourceStatusReason": "Template format error: Unresolved resource
dependencies [TestMacroVariable] in the Resources block of the
template"
The error is the same when I don't provide TestMacroVariable at all.
I tried different types: String, Number, Boolean, List; none of them work.
As far as I know, you cannot have anything other than the Location key in the Parameters section of AWS::Include. Check here: AWS DOC
As an alternative, you can pass in the whole S3 path as a parameter and reference it in Location:
Parameters:
  MyS3Path:
    Type: String
    Default: 's3://my-cf-templates/my-include.yaml'
...
'Fn::Transform':
  Name: 'AWS::Include'
  Parameters:
    Location: !Ref MyS3Path
Building on what @BatteryAcid said, you can refer to the parameters in your CloudFormation template directly from your included file using the Sub function:
In your CF template:
Parameters:
  TableName:
    Type: String
    Description: Table Name of the Dynamo DB Users table
    Default: 'Users'
In the file you are including:
"Resource": [
{
"Fn::Sub": [
"arn:${AWS::Partition}:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${tableName}",
{
"tableName": {
"Ref": "TableName"
}
}
]
}
Alternatively, it doesn't have to be a parameter; you can reference any resource from your template:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${QueryTelemtryFunction.Arn}/invocations

Create AMI image as part of a cloudformation stack

I want to create an EC2 CloudFormation stack, which can basically be described in the following steps:
1.- Launch instance
2.- Provision the instance
3.- Stop the instance and create an AMI image out of it
4.- Create an autoscaling group with the created AMI image as source to launch new instances.
Basically I can do 1 and 2 in one CloudFormation template and 4 in a second template. What I don't seem able to do is create an AMI image from an instance inside a CloudFormation template, which generates the problem of having to manually remove the AMI if I want to remove the stack.
That being said, my questions are:
1.- Is there a way to create an AMI image from an instance INSIDE the CloudFormation template?
2.- If the answer to 1 is no, is there a way to add an AMI image (or any other resource for that matter) to make it part of a completed stack?
EDIT:
Just to clarify, I've already solved the problem of creating the AMI and using it in a CloudFormation template; I just can't create the AMI INSIDE the CloudFormation template or add it somehow to the created stack.
As I commented on Rico's answer, what I do now is use an Ansible playbook which basically has 3 steps:
1.- Create a base instance with a CloudFormation template
2.- Create, using Ansible, an AMI of the instance created in step 1
3.- Create the rest of the stack (ELB, autoscaling groups, etc.) with a second CloudFormation template that updates the one created in step 1 and uses the AMI created in step 2 to launch instances.
This is how I manage it now, but I wanted to know if there's any way to create an AMI INSIDE a CloudFormation template, or if it's possible to add the created AMI to the stack (something like telling the stack, "Hey, this belongs to you as well, so handle it").
Yes, you can create an AMI from an EC2 instance within a CloudFormation template by implementing a Custom Resource that calls the CreateImage API on create (and calls the DeregisterImage and DeleteSnapshot APIs on delete).
Since AMIs can sometimes take a long time to create, a Lambda-backed Custom Resource will need to re-invoke itself if the wait has not completed before the Lambda function times out.
Here's a complete example:
Description: Create an AMI from an EC2 instance.
Parameters:
  ImageId:
    Description: Image ID for base EC2 instance.
    Type: AWS::EC2::Image::Id
    # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
    Default: ami-9be6f38c
  InstanceType:
    Description: Instance type to launch EC2 instances.
    Type: String
    Default: m3.medium
    AllowedValues: [ m3.medium, m3.large, m3.xlarge, m3.2xlarge ]
Resources:
  # Completes when the instance is fully provisioned and ready for AMI creation.
  AMICreate:
    Type: AWS::CloudFormation::WaitCondition
    CreationPolicy:
      ResourceSignal:
        Timeout: PT10M
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      UserData:
        "Fn::Base64": !Sub |
          #!/bin/bash -x
          yum -y install mysql # provisioning example
          /opt/aws/bin/cfn-signal \
            -e $? \
            --stack ${AWS::StackName} \
            --region ${AWS::Region} \
            --resource AMICreate
          shutdown -h now
  AMI:
    Type: Custom::AMI
    DependsOn: AMICreate
    Properties:
      ServiceToken: !GetAtt AMIFunction.Arn
      InstanceId: !Ref Instance
  AMIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: !Sub |
          var response = require('cfn-response');
          var AWS = require('aws-sdk');
          exports.handler = function(event, context) {
            console.log("Request received:\n", JSON.stringify(event));
            var physicalId = event.PhysicalResourceId;
            function success(data) {
              return response.send(event, context, response.SUCCESS, data, physicalId);
            }
            function failed(e) {
              return response.send(event, context, response.FAILED, e, physicalId);
            }
            // Call ec2.waitFor, continuing if not finished before Lambda function timeout.
            function wait(waiter) {
              console.log("Waiting: ", JSON.stringify(waiter));
              event.waiter = waiter;
              event.PhysicalResourceId = physicalId;
              var request = ec2.waitFor(waiter.state, waiter.params);
              setTimeout(()=>{
                request.abort();
                console.log("Timeout reached, continuing function. Params:\n", JSON.stringify(event));
                var lambda = new AWS.Lambda();
                lambda.invoke({
                  FunctionName: context.invokedFunctionArn,
                  InvocationType: 'Event',
                  Payload: JSON.stringify(event)
                }).promise().then((data)=>context.done()).catch((err)=>context.fail(err));
              }, context.getRemainingTimeInMillis() - 5000);
              return request.promise().catch((err)=>
                (err.code == 'RequestAbortedError') ?
                  new Promise(()=>context.done()) :
                  Promise.reject(err)
              );
            }
            var ec2 = new AWS.EC2(),
                instanceId = event.ResourceProperties.InstanceId;
            if (event.waiter) {
              wait(event.waiter).then((data)=>success({})).catch((err)=>failed(err));
            } else if (event.RequestType == 'Create' || event.RequestType == 'Update') {
              if (!instanceId) { failed('InstanceID required'); }
              ec2.waitFor('instanceStopped', {InstanceIds: [instanceId]}).promise()
              .then((data)=>
                ec2.createImage({
                  InstanceId: instanceId,
                  Name: event.RequestId
                }).promise()
              ).then((data)=>
                wait({
                  state: 'imageAvailable',
                  params: {ImageIds: [physicalId = data.ImageId]}
                })
              ).then((data)=>success({})).catch((err)=>failed(err));
            } else if (event.RequestType == 'Delete') {
              if (physicalId.indexOf('ami-') !== 0) { return success({});}
              ec2.describeImages({ImageIds: [physicalId]}).promise()
              .then((data)=>
                (data.Images.length == 0) ? success({}) :
                  ec2.deregisterImage({ImageId: physicalId}).promise()
              ).then((data)=>
                ec2.describeSnapshots({Filters: [{
                  Name: 'description',
                  Values: ["*" + physicalId + "*"]
                }]}).promise()
              ).then((data)=>
                (data.Snapshots.length === 0) ? success({}) :
                  ec2.deleteSnapshot({SnapshotId: data.Snapshots[0].SnapshotId}).promise()
              ).then((data)=>success({})).catch((err)=>failed(err));
            }
          };
      Runtime: nodejs4.3
      Timeout: 300
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: {Service: [lambda.amazonaws.com]}
            Action: ['sts:AssumeRole']
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - arn:aws:iam::aws:policy/service-role/AWSLambdaRole
      Policies:
        - PolicyName: EC2Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'ec2:DescribeInstances'
                  - 'ec2:DescribeImages'
                  - 'ec2:CreateImage'
                  - 'ec2:DeregisterImage'
                  - 'ec2:DescribeSnapshots'
                  - 'ec2:DeleteSnapshot'
                Resource: ['*']
Outputs:
  AMI:
    Value: !Ref AMI
For what it's worth, here's a Python variant of wjordan's AMIFunction definition from the original answer. All other resources in the original YAML remain unchanged:
AMIFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Code:
      ZipFile: !Sub |
        import logging
        import cfnresponse
        import json
        import boto3
        from threading import Timer
        from botocore.exceptions import WaiterError

        logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        def handler(event, context):
            ec2 = boto3.resource('ec2')
            physicalId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else None

            def success(data={}):
                cfnresponse.send(event, context, cfnresponse.SUCCESS, data, physicalId)

            def failed(e):
                cfnresponse.send(event, context, cfnresponse.FAILED, str(e), physicalId)

            logger.info('Request received: %s\n' % json.dumps(event))
            try:
                instanceId = event['ResourceProperties']['InstanceId']
                if (not instanceId):
                    raise Exception('InstanceID required')
                if not 'RequestType' in event:
                    success({'Data': 'Unhandled request type'})
                    return
                if event['RequestType'] == 'Delete':
                    if (not physicalId.startswith('ami-')):
                        raise Exception('Unknown PhysicalId: %s' % physicalId)
                    ec2client = boto3.client('ec2')
                    images = ec2client.describe_images(ImageIds=[physicalId])
                    for image in images['Images']:
                        ec2.Image(image['ImageId']).deregister()
                        snapshots = ([bdm['Ebs']['SnapshotId']
                                      for bdm in image['BlockDeviceMappings']
                                      if 'Ebs' in bdm and 'SnapshotId' in bdm['Ebs']])
                        for snapshot in snapshots:
                            ec2.Snapshot(snapshot).delete()
                    success({'Data': 'OK'})
                elif event['RequestType'] in set(['Create', 'Update']):
                    if not physicalId:  # AMI creation has not been requested yet
                        instance = ec2.Instance(instanceId)
                        instance.wait_until_stopped()
                        image = instance.create_image(Name="Automatic from CloudFormation stack ${AWS::StackName}")
                        physicalId = image.image_id
                    else:
                        logger.info('Continuing in awaiting image available: %s\n' % physicalId)
                    ec2client = boto3.client('ec2')
                    waiter = ec2client.get_waiter('image_available')
                    try:
                        waiter.wait(ImageIds=[physicalId], WaiterConfig={'Delay': 30, 'MaxAttempts': 6})
                    except WaiterError as e:
                        # Request the same event but set PhysicalResourceId so that the AMI is not created again
                        event['PhysicalResourceId'] = physicalId
                        logger.info('Timeout reached, continuing function: %s\n' % json.dumps(event))
                        lambda_client = boto3.client('lambda')
                        lambda_client.invoke(FunctionName=context.invoked_function_arn,
                                             InvocationType='Event',
                                             Payload=json.dumps(event))
                        return
                    success({'Data': 'OK'})
                else:
                    success({'Data': 'OK'})
            except Exception as e:
                failed(e)
    Runtime: python2.7
    Timeout: 300
1.- No.
2.- I suppose yes. Once the stack is created, you can use the "Update Stack" operation. You need to provide the full JSON template of the initial stack plus your changes in that same file (the changed AMI). I would run this in a test environment first (not production), as I'm not really sure what the operation does to the existing instances.
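For illustration, a hedged sketch of that update with the JavaScript SDK; the stack and file names are placeholders, and templates that touch IAM also need Capabilities:
import * as AWS from 'aws-sdk';
import { readFileSync } from 'fs';

const cfn = new AWS.CloudFormation();

// Sketch: push the full, edited template (e.g. with the new AMI id) as a stack update.
async function updateStackWithNewAmi(): Promise<void> {
    await cfn.updateStack({
        StackName: 'my-stack',                                            // placeholder
        TemplateBody: readFileSync('template-with-new-ami.json', 'utf8'), // placeholder file
    }).promise();
    // Block until CloudFormation reports the update finished.
    await cfn.waitFor('stackUpdateComplete', { StackName: 'my-stack' }).promise();
}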
Why not create an AMI initially outside CloudFormation and then use that AMI in your final CloudFormation template?
Another option is to write some automation to create two CloudFormation stacks, deleting the first one once the AMI you've created is finalized.
While @wjordan's solution is good for simple use cases, updating the User Data will not update the AMI.
(DISCLAIMER: I am the original author.) cloudformation-ami aims to let you declare AMIs in CloudFormation that can be reliably created, updated and deleted. Using cloudformation-ami, you can declare custom AMIs like this:
MyAMI:
  Type: Custom::AMI
  Properties:
    ServiceToken: !ImportValue AMILambdaFunctionArn
    Image:
      Name: my-image
      Description: some description for the image
    TemplateInstance:
      ImageId: ami-467ca739
      IamInstanceProfile:
        Arn: arn:aws:iam::1234567890:instance-profile/MyProfile-ASDNSDLKJ
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -x
          yum -y install mysql # provisioning example
          # Signal that the instance is ready
          INSTANCE_ID=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
          aws ec2 create-tags --resources $INSTANCE_ID --tags Key=UserDataFinished,Value=true --region ${AWS::Region}
      KeyName: my-key
      InstanceType: t2.nano
      SecurityGroupIds:
        - sg-d7bf78b0
      SubnetId: subnet-ba03aa91
      BlockDeviceMappings:
        - DeviceName: "/dev/xvda"
          Ebs:
            VolumeSize: '10'
            VolumeType: gp2
