AWS SAM - Apply policy template for a resource created conditionally - aws-lambda

I create a DynamoDB table conditionally:

MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

and this is how IsDevAccount is defined using an input parameter:

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]
Now I'm creating a Lambda function that accepts the table's name (amongst other things) as input through environment variables. This is done conditionally, too. Within the function's code, I check whether a table name was passed (an empty value is passed when the condition isn't met), and if so, I put some items into it.
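For illustration, the function-side check could look something like this in Python (an assumed runtime; TABLE_NAME is an illustrative environment-variable name and the item shape is made up):

import os

import boto3

# TABLE_NAME is set (possibly to an empty string) by the template's environment section.
TABLE_NAME = os.environ.get("TABLE_NAME", "")


def handler(event, context):
    if not TABLE_NAME:
        # The table wasn't created in this account/stage, so skip the write.
        return {"written": False}

    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    table.put_item(Item={"pk": "example-id", "payload": "example"})
    return {"written": True}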
However, I'm not sure how to apply policy templates to the function's role conditionally. Normally I do it like this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          TableName: !Ref MyDynamoTable
What happens to the function's execution role if the table isn't created because the condition isn't met (e.g.: in another account)? Can I apply this policy template conditionally, as well?
What I don't want to do is to blindly give write permission to all DynamoDB tables within the account.

Yes, you can add a condition to the DynamoDB write policy so that it is only applied when the condition is met.
Since you create the table only in the dev/staging environment, you can put a condition on the policy that checks for your table name and only then apply the write policy. Example below:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Equals [ !Ref MyDynamoTable, "myTableName" ]
          TableName: !Ref MyDynamoTable
Update in response to comments:
!Ref returns the value of the specified parameter or resource. We need parameters with allowed values for the environment and the DB table so the conditions have something to evaluate:
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - stage
      - prod
  MyDynamoTable:
    Description: table name for the DB
    Type: String
    AllowedValues:
      - tableOne
      - tableTwo
      - myTableName

Conditions:
  IsDevAccount: !Equals [ !Ref Environment, "dev" ]
  TableExists: !Equals [ !Ref MyDynamoTable, "myTableName" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !And [ IsDevAccount, TableExists ] # with the added parameters, the TableExists condition alone would work fine
          TableName: !Ref MyDynamoTable
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Update 2:
Agreed. I researched and confirmed that there is no way to check for resources created in the same stack template (which is why I suggested a parameter); conditions are all parameter-based.
If the resource was already created in another stack, you could reference it through resource import, but I don't think resource import will help with your requirement.
A workaround, however, is to have a Boolean parameter for the TableExists condition and pass its value through the AWS CLI at deploy time, like below:
MyDynamoTable:
  Description: dynamo db table
  Type: String
  AllowedValues:
    - "true"
    - "false"

Conditions:
  TableExists: !Equals [ !Ref MyDynamoTable, "true" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: TableExists
          TableName: !Ref dynamoDBtableName
Pass the required parameters through the AWS CLI on deploy:

aws cloudformation deploy --template-file templateName.yml --parameter-overrides MyDynamoTable="true" dynamoDBtableName="myTableName" (plus any other required parameters)

Related

Is it possible to set EventBridge ScheduleExpression value from SSM in Serverless

I want to schedule one Lambda via AWS EventBridge. The issue is that I want to read the number used in the ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: 1
        Description: value in minutes; needs to be converted to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading it from SSM is that this is a heartbeat service: the same value will be used by the front end to send a heartbeat at a set interval, and the back-end Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the variable as a command-line argument, something like below:
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
        schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
Even if we got the other approach to work, we would still have to redeploy the application, just as we need to redeploy in this case.
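As an aside, the runtime side of the design described in the question (the back-end Lambda reading the shared interval from SSM) might look roughly like the sketch below; the parameter path and the staleness rule are assumptions for illustration only:

import datetime
import os

import boto3

ssm = boto3.client("ssm")

# Assumed parameter path, mirroring the template above.
INTERVAL_PARAM = os.environ.get("HEARTBEAT_INTERVAL_PARAM", "/dev/lambda/HeartbeatInterval")


def handler(event, context):
    # The SSM parameter stores the heartbeat interval in minutes.
    minutes = int(ssm.get_parameter(Name=INTERVAL_PARAM)["Parameter"]["Value"])

    # Heartbeats older than 2x the interval are considered stale.
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=2 * minutes)
    print("Heartbeats older than %s are stale" % cutoff.isoformat())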

How do I access the Cognito UserPoolClient Secret in Lambda function?

I have created a Cognito UserPool and UserPoolClient via Resources in my serverless.yml file like this:
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    AccountRecoverySetting:
      RecoveryMechanisms:
        - Name: verified_email
          Priority: 2
    UserPoolName: ${self:provider.stage}-user-pool
    UsernameAttributes:
      - email
    MfaConfiguration: OFF
    Policies:
      PasswordPolicy:
        MinimumLength: 8
        RequireLowercase: True
        RequireNumbers: True
        RequireSymbols: True
        RequireUppercase: True

CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    ClientName: ${self:provider.stage}-user-pool-client
    UserPoolId:
      Ref: CognitoUserPool
    ExplicitAuthFlows:
      - ALLOW_USER_PASSWORD_AUTH
      - ALLOW_REFRESH_TOKEN_AUTH
    GenerateSecret: true
Now I can pass the user pool and user pool client IDs as environment variables to the Lambda functions like this:
my_function:
  package: {}
  handler:
  events:
    - http:
        path: <path>
        method: post
        cors: true
  environment:
    USER_POOL_ID: !Ref CognitoUserPool
    USER_POOL_CLIENT_ID: !Ref CognitoUserPoolClient
I can access these IDs in my code as:
USER_POOL_ID = os.environ['USER_POOL_ID']
USER_POOL_CLIENT_ID = os.environ['USER_POOL_CLIENT_ID']
I have printed the values and they are printed correctly. However, the UserPoolClient also generates an app client secret, which I need in order to generate the secret hash. How can I access the app client secret (the UserPoolClient's secret) in my Lambda?
Probably not what you hoped for, but you cannot export the client secret from CloudFormation explicitly. Take a look at the return values of AWS::Cognito::UserPoolClient: you can only get the client ID there.
What you could do is create the client in another CloudFormation template and either add a custom resource there that reads the secret and outputs it, or add an intermediate step where you fetch the value with the CLI and then pass it into Serverless.
There is currently no other option.
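For illustration, the custom-resource route might look roughly like the sketch below: a Lambda-backed resource calls describe_user_pool_client and returns the secret to CloudFormation so it can be exported or passed on (the property and output names here are assumptions, and note the secret then appears in the custom resource's response):

import boto3
import cfnresponse  # helper available to inline (ZipFile) Lambda code in CloudFormation

cognito = boto3.client("cognito-idp")


def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return

        props = event["ResourceProperties"]
        client = cognito.describe_user_pool_client(
            UserPoolId=props["UserPoolId"],
            ClientId=props["ClientId"],
        )["UserPoolClient"]

        # Expose the secret as a resource attribute, readable via !GetAtt.
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {"ClientSecret": client["ClientSecret"]})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})

Once the secret is available to the function, the SECRET_HASH it needs is just the Base64-encoded HMAC-SHA256 of username + client ID, keyed with the client secret.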

In CloudFormation, how do I target a Lambda alias in Events::Rule

I'm trying to trigger a Lambda:alias (the alias is key here) on a schedule. The following code errors out with:
"SampleLambdaLiveAlias is not valid. Reason: Provided Arn is not in correct format. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: ValidationException;"
How do I properly target the lambda:alias in CloudFormation? I've tried !Ref, !Sub and just the logical name.
My custom-resource approach to retrieving the latest Lambda version appears to be a necessary evil of setting up the "live" alias, because AWS keeps old Lambda versions around even after you delete the Lambda and the stack, and a valid version is required for a new alias. If anyone knows a more elegant approach to that problem, please see: how-to-use-sam-deploy-to-get-a-lambda-with-autopublishalias-and-additional-alises
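For reference, the custom resource behind SampleLambdaGetMaxVersionFunction isn't shown in the question; a hedged sketch of one way such a resource could work (list the published versions and return the highest one, so the template can read it via !GetAtt ... .version) is:

import boto3
import cfnresponse

lambda_client = boto3.client("lambda")


def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return

        function_name = event["ResourceProperties"]["FunctionName"]
        versions = []
        paginator = lambda_client.get_paginator("list_versions_by_function")
        for page in paginator.paginate(FunctionName=function_name):
            versions.extend(v["Version"] for v in page["Versions"])

        # Ignore $LATEST; published versions are numeric strings.
        highest = max(int(v) for v in versions if v != "$LATEST")
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {"version": str(highest)})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})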
SampleLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: SampleLambda
    AutoPublishAlias: staging
    CodeUri: src/
    Handler: SampleLambda.handler
    MemorySize: 512
    Runtime: nodejs12.x
    Role: !GetAtt SampleLambdaRole.Arn

SampleLambdaLiveAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref SampleLambdaFunction
    FunctionVersion: !GetAtt SampleLambdaGetMaxVersionFunction.version
    Name: live

SampleLambdaFunctionScheduledEvent:
  Type: AWS::Events::Rule
  Properties:
    State: ENABLED
    ScheduleExpression: rate(1 minute) # same as cron(0/1 * * * ? *)
    Description: Run SampleLambdaFunction once every 5 minutes.
    Targets:
      - Id: EventSampleLambda
        Arn: SampleLambdaLiveAlias
Your error is in the last line of the configuration you shared. In order to get the resource ARN you need to use the Ref intrinsic function, i.e. !Ref SampleLambdaLiveAlias:
SampleLambdaFunctionScheduledEvent:
  Type: AWS::Events::Rule
  Properties:
    State: ENABLED
    ScheduleExpression: rate(1 minute) # same as cron(0/1 * * * ? *)
    Description: Run SampleLambdaFunction once every 5 minutes.
    Targets:
      - Id: EventSampleLambda
        Arn: !Ref SampleLambdaLiveAlias
Be aware that the Ref intrinsic function may return different things for different resource types. For a Lambda alias it returns the ARN, which is just what you need.
You can check the official documentation for more detail.

AWS CloudFormation !Sub & !Ref functions inside AWS::Serverless::Function Policies

I have been using the !Sub function in my CloudFormation YAML templates just fine, and when I use it as an object property value it works for me:

Object:
  Property1: !Sub some-value-with-a-${variable}-in-it
The value of variable gets replaced as expected.
However, I can't figure out how to use the !Sub function in an element of a string array:

Array:
  - !Sub some-value-with-a-${variable}-in-it
That array element just gets ignored.
I am trying this in the context of a SAM template creating an AWS::Serverless::Function resource. The Policies property can take an array of strings:

lambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: api
    FunctionName: !Sub api-${MyStageName}
    Handler: Lambda:Api.Function::HandleAsync
    Runtime: dotnetcore1.0
    Policies:
      - AWSLambdaBasicExecutionRole
      - !Sub arn:aws:iam::${AWS::AccountId}:policy/some-policy
      - arn:aws:iam::123456789:policy/another-policy
The !Sub function works in the FunctionName property in this example. But I only end up with 2 Policies attached to my generated role - AWSLambdaBasicExecutionRole and arn:aws:iam::123456789:policy/another-policy. The one including the !Sub function gets ignored.
I have tried options like putting the function on a new line:

Array:
  -
    !Sub some value with a ${variable} in it
Can anyone help?
Update
@Tom Melo pointed out that this is not an array problem, so I have adjusted my question.
Further investigation has revealed it is not a CloudFormation issue exactly, but something very specific to the AWS::Serverless::Function resource type and the Policies property within it. I suspect it has something to do with the fact that the Policies property is so flexible in what it can accept: it can accept strings referring to policy names or ARNs, and it can also accept policy documents describing a new policy. I suspect this means it is not able to support the functions.
Apparently, there's nothing wrong with the !Sub function in an array of elements.
I tried to create the following stack in CloudFormation and it worked:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'IAM Roles Template'
Parameters:
  ArnBase:
    Type: String
    Default: arn:aws:iam::aws:policy/
  AWSLambdaFullAccess:
    Type: String
    Default: AWSLambdaFullAccess
  AmazonSESFullAccess:
    Type: String
    Default: AmazonSESFullAccess
Resources:
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudFormationFullAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action: "cloudformation:*"
                Resource: "*"
      ManagedPolicyArns:
        - !Sub ${ArnBase}${AWSLambdaFullAccess}
        - !Sub ${ArnBase}${AmazonSESFullAccess}
        - !Sub arn:aws:iam::${AWS::AccountId}:policy/CustomAmazonGlacierReadOnlyAccess
That should work with SAM as well...
Workaround
The AWS::Serverless::Function resource type supports several ways of configuring access:
1. Refer to policies directly via the Policies property and have the framework create a role that has those policies.
2. Use the Role property to refer to a role which already contains the required policies.
I was using option 1, but option 2 proves to be a way around the issues with the !Sub function.
Creating a role explicitly, with the policies we want, using AWS::IAM::Role means we can use the !Sub function within the ManagedPolicyArns property. For example:
role:
  Type: AWS::IAM::Role
  Properties:
    ...
    ManagedPolicyArns:
      - !Sub arn:aws:iam::${AWS::AccountId}:policy/some-policy
    ...

lambda:
  Type: AWS::Serverless::Function
  Properties:
    ...
    Role: !GetAtt role.Arn
    ...

Auto-assign IPv6 address via AWS and CloudFormation

Is there any way to have IPv6 addresses auto-assigned to EC2 instances within an autoscaling group+launch configuration?
VPC and subnets are all set up for IPv6. Manually created instances are ok.
I can also manually assign them, but I can't seem to find a way to do it in CloudFormation.
The current status is that CloudFormation support for IPv6 is workable. It's not fun or complete, but you can build a stack with it. I had to use two custom resources:
The first is a generic resource that I use for other things and reused here, to work around the missing ability to construct a subnet's /64 CIDR block from the VPC's auto-provided /56 network.
The other I had to add specifically to work around a bug in the EC2 API that CloudFormation (correctly) uses.
Here is my setup:
1. Add IPv6 CIDR block to your VPC:
VPCipv6:
  Type: "AWS::EC2::VPCCidrBlock"
  Properties:
    VpcId: !Ref VPC
    AmazonProvidedIpv6CidrBlock: true
2. Extract the network prefix for creating /64 subnets:
As explained in this answer.
VPCipv6Prefix:
  Type: Custom::Variable
  Properties:
    ServiceToken: !GetAtt [ IdentityFunc, Arn ]
    Value: !Select [ 0, !Split [ "00::/", !Select [ 0, !GetAtt VPC.Ipv6CidrBlocks ] ] ]
IdentityFunc is an "identity function" implemented in Lambda for "custom variables", as described in this answer. Unlike the linked answer, I implement the function directly in the same stack so it is easier to maintain. See here for the gist.
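A minimal sketch of what such an identity function can look like (an assumed implementation, not the linked gist): it simply echoes the Value property back so the stack can read it with !GetAtt, e.g. ${VPCipv6Prefix.Value}.

import cfnresponse


def lambda_handler(event, context):
    if event["RequestType"] == "Delete":
        # Nothing to clean up for a pure "variable" resource.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        return

    value = event["ResourceProperties"].get("Value", "")
    cfnresponse.send(event, context, cfnresponse.SUCCESS, {"Value": value})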
3. Add an IPv6 default route to your internet gateway:
RouteInternet6:
  Type: "AWS::EC2::Route"
  Properties:
    RouteTableId: !Ref RouteTableMain
    DestinationIpv6CidrBlock: "::/0"
    GatewayId: !Ref IGWPublicNet
  DependsOn:
    - IGWNetAttachment

IGWNetAttachment is a reference to the AWS::EC2::VPCGatewayAttachment defined in the stack. If you don't wait for it, the route may fail to be set properly.
4. Add an IPv6 CIDR block to your subnets:
SubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone: !Select [ 0, !GetAZs { Ref: "AWS::Region" } ]
    CidrBlock: 172.20.0.0/24
    MapPublicIpOnLaunch: true
    # The following does not work with MapPublicIpOnLaunch because of an EC2 bug
    ## AssignIpv6AddressOnCreation: true
    Ipv6CidrBlock: !Sub "${VPCipv6Prefix.Value}00::/64"
    VpcId:
      Ref: VPC

Regarding AssignIpv6AddressOnCreation being commented out: this is normally what you want, but apparently there's a bug in the EC2 API that prevents it from working, through no fault of CloudFormation. This is documented in this AWS forums thread, as is the solution, which I'll present next.
5. Fix the AssignIpv6AddressOnCreation problem with another lambda:
This is the lambda setup:
IPv6WorkaroundRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole
    Path: "/"
    Policies:
      - PolicyName: !Sub "ipv6-fix-logs-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: !Sub "ipv6-fix-modify-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - ec2:ModifySubnetAttribute
              Resource: "*"
IPv6WorkaroundLambda:
  Type: AWS::Lambda::Function
  Properties:
    Handler: "index.lambda_handler"
    Code: # the cfnresponse import below is required to send a response back to CFN
      ZipFile:
        Fn::Sub: |
          import cfnresponse
          import boto3

          def lambda_handler(event, context):
              # Nothing to undo on stack deletion; just report success
              if event['RequestType'] == 'Delete':
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                  return
              responseValue = event['ResourceProperties']['SubnetId']
              ec2 = boto3.client('ec2', region_name='${AWS::Region}')
              ec2.modify_subnet_attribute(
                  AssignIpv6AddressOnCreation={'Value': True},
                  SubnetId=responseValue)
              responseData = {}
              responseData['SubnetId'] = responseValue
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
    Runtime: python2.7
    Role: !GetAtt IPv6WorkaroundRole.Arn
    Timeout: 30
And this is how you use it:
IPv6WorkaroundSubnetA:
  Type: Custom::SubnetModify
  Properties:
    ServiceToken: !GetAtt IPv6WorkaroundLambda.Arn
    SubnetId: !Ref SubnetA
This call races with the autoscaling group to complete the setup, but it is very unlikely to lose. I ran this a few dozen times and it never failed to set the field correctly before the first instance booted.
I had a very similar issue and had a chat with AWS Support about it. The current state is that IPv6 support in CloudFormation is very limited.
We ended up creating custom resources for lots of IPv6-specific things. We have a custom resource that:
- Enables IPv6 allocation on a subnet
- Creates an Egress-Only Internet Gateway
- Adds a route to the Egress-Only Internet Gateway (the built-in Route resource says it "fails to stabilize" when pointing to an EIGW)
The custom resources are just Lambda functions that make the "raw" API call, plus an IAM role that grants the Lambda enough permissions to make that API call.
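As an illustration of what one of those "raw API call" custom resources might look like (a hedged sketch; the property names and handling are assumptions), the Egress-Only Internet Gateway case comes down to two boto3 calls:

import boto3
import cfnresponse

ec2 = boto3.client("ec2")


def lambda_handler(event, context):
    try:
        if event["RequestType"] == "Create":
            vpc_id = event["ResourceProperties"]["VpcId"]
            result = ec2.create_egress_only_internet_gateway(VpcId=vpc_id)
            eigw_id = result["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {"EgressOnlyInternetGatewayId": eigw_id}, eigw_id)
        elif event["RequestType"] == "Delete":
            ec2.delete_egress_only_internet_gateway(
                EgressOnlyInternetGatewayId=event["PhysicalResourceId"])
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {},
                             event["PhysicalResourceId"])
        else:  # Update: nothing to change for this simple resource
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {},
                             event["PhysicalResourceId"])
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})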
