I have a SAM template file that throws the following error during sam build: [InvalidResourceException('MyFunction', "Type of property 'Events' is invalid.")]
First off, at the top of my file (at the same level as Globals) I have this event (the idea is to define a CloudWatch schedule that fires every 15 minutes and invokes a lambda):
Events:
  Type: Schedule
  Properties:
    Schedule: rate(15 mins)
    name: InvokeEvery15MinutesSchedule
    Description: Invoke the target every 15 mins
    Enabled: True
And here's what the function looks like:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./path-to-code
    Events:
      - !Ref InvokeEvery15MinutesSchedule
I was doing this because I saw earlier that the following syntax is valid:
Globals:
  Function:
    Layers:
      - !Ref Layer1
      - !Ref Layer2
So, I thought that if I define an event at the top level and reference it inside the lambda, it will work. I want to keep it outside of the Lambda declaration because I want this to apply to several functions.
Can someone help with what I'm doing wrong?
Events is a Lambda event source object: it defines the events that trigger the function. Unlike Layers, Events is not supported in the Globals section, and it must be a map of named event source objects declared directly on each function rather than a list of !Ref values.
Try this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./path-to-code
    Events:
      RateSchedule:
        Type: Schedule
        Properties:
          Schedule: rate(15 minutes)
          Name: InvokeEvery15MinutesSchedule
          Description: Invoke the target every 15 minutes
          Enabled: True
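Since Events can't be shared through Globals, one workaround for applying a single schedule to several functions is a standalone AWS::Events::Rule with multiple targets, plus an AWS::Lambda::Permission per function. This is only a rough sketch under that assumption; SharedSchedule, MyOtherFunction and the permission resource are illustrative names, not part of the original template:

SharedSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(15 minutes)
    State: ENABLED
    Targets:
      - Id: MyFunctionTarget
        Arn: !GetAtt MyFunction.Arn
      - Id: MyOtherFunctionTarget        # hypothetical second function sharing the schedule
        Arn: !GetAtt MyOtherFunction.Arn

MyFunctionSchedulePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref MyFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt SharedSchedule.Arn  # one such permission is needed per targeted function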
Resources:
  myFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myFunction
      Handler: myFunction.lambda_handler
  myOtherFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myOtherFunction
      Handler: myOtherFunction.lambda_handler
I want to run a yq command such that, for every resource with Type: AWS::Serverless::Function, it grabs the value of Handler and adds another attribute under Properties called Environment.Variables.HANDLER.
I have the following command so far:
yq '(.Resources.[] | select(.Type=="AWS::Serverless::Function") | .Properties.Environment.Variables.HANDLER) += (.Resources.[].Properties.Handler)' test.yaml
Which ends up with:
Resources:
  myFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myFunction
      Handler: myFunction.lambda_handler
      Environment:
        Variables:
          HANDLER: myOtherFunction.lambda_handler # This is wrong
  myOtherFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myOtherFunction
      Handler: myOtherFunction.lambda_handler
      Environment:
        Variables:
          HANDLER: myOtherFunction.lambda_handler
Here Environment.Variables.HANDLER is set to myOtherFunction's Handler for all functions. How do I grab the value from the particular resource that is being updated?
Use the update operator |= whenever you want to stay in context.
.Resources[]
|= select(.Type=="AWS::Serverless::Function").Properties
|= .Environment.Variables.HANDLER = .Handler
Or use the with function:
with(
  .Resources[] | select(.Type=="AWS::Serverless::Function").Properties;
  .Environment.Variables.HANDLER = .Handler
)
Both evaluate to:
Resources:
  myFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myFunction
      Handler: myFunction.lambda_handler
      Environment:
        Variables:
          HANDLER: myFunction.lambda_handler
  myOtherFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: myOtherFunction
      Handler: myOtherFunction.lambda_handler
      Environment:
        Variables:
          HANDLER: myOtherFunction.lambda_handler
Note: In your approach, using .Resources.[] a second time starts a fresh iteration over all resources, so every matching item ends up with the last value fetched.
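For reference, applying either expression in place with yq v4 (assuming the file is named test.yaml, as in the question) might look like:

yq -i 'with(.Resources[] | select(.Type=="AWS::Serverless::Function").Properties; .Environment.Variables.HANDLER = .Handler)' test.yaml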
I want to schedule a Lambda via AWS EventBridge. The issue is that I want to read the number used in the schedule expression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: "1"
        Description: value in minutes; needs to be converted to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading it from SSM is that it's a heartbeat service, and the same value will be used by the frontend to send a heartbeat at a set interval. The backend Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the value as a command-line option, something like below:
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
    schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
Even if we got the SSM approach to work, we would still have to redeploy the application to pick up a new value, just as we need to redeploy with this approach.
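For completeness, the override would then be supplied at deploy time. A rough example, assuming Serverless Framework v2-style arbitrary CLI options (v3 moved custom values to --param):

# the option name must match the ${opt:mySchedule, ...} reference in serverless.yml
serverless deploy --mySchedule "rate(2 minutes)"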
I create a DynamoDB table conditionally:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount
and this is how IsDevAccount is defined using an input parameter:
Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]
Now I'm creating a Lambda function that accepts the table's name (amongst other things) as input through environment variables. This is done conditionally, too (an empty value is passed if the condition isn't met). Within the function's code, I check whether the table name is set, and if so, I put some items into the table.
However, I'm not sure how to apply policy templates to the function's role conditionally. Normally I do it like this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          TableName: !Ref MyDynamoTable
What happens to the function's execution role if the table isn't created because the condition isn't met (e.g.: in another account)? Can I apply this policy template conditionally, as well?
What I don't want to do is to blindly give write permission to all DynamoDB tables within the account.
Yes, you could add a condition to the DynamoDB write policy so that it is only attached when the condition is met.
Since you're creating the table only when the environment is staging or development, you could apply a condition on the policy that checks for your table name and then apply the write policy. Example below:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Equals [ !Ref MyDynamoTable, "myTableName" ]
          TableName: !Ref MyDynamoTable
Update in response to comments:
!Ref returns the value of the specified parameter or resource. We need parameters with allowed values for the environment and the DB table so they can be used in the conditions.
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - stage
      - prod
  MyDynamoTable:
    Description: table name for the db
    Type: String
    AllowedValues:
      - tableOne
      - tableTwo
      - myTableName

Conditions:
  IsDevAccount: !Equals [ !Ref Environment, "dev" ]
  TableExists: !Equals [ !Ref MyDynamoTable, "myTableName" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !And [IsDevAccount, TableExists] # with the added parameters it also works with only the TableExists condition
          TableName: !Ref MyDynamoTable
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Update 2:
Agreed. I researched and confirmed that there is no way to check for resources created in the same stack template (that's why I suggested a parameter); conditions are all parameter-based.
If the resource was already created in another stack, you could handle this through resource import, but I don't think resource import helps with your requirement.
However, a workaround would be to use a Boolean parameter for the TableExists condition and pass its value through the AWS CLI at deploy time, like below:
Parameters:
  MyDynamoTable:
    Description: dynamo db table
    Type: String
    AllowedValues:
      - true
      - false

Conditions:
  TableExists: !Equals [ !Ref MyDynamoTable, "true" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Ref TableExists
          TableName: !Ref MyDynamoTable
On deploy, pass the required parameters (and any others your template needs) through the AWS CLI:
aws cloudformation deploy --template-file templateName.yml --parameter-overrides MyDynamoTable="true" dynamoDBtableName="myTableName"
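As an alternative sketch (not part of the answer above): keep the table-specific permissions out of the SAM Policies list entirely and attach them through a separate AWS::IAM::Policy resource that carries the condition, referencing the execution role SAM generates for the function (its logical ID is the function's logical ID plus "Role", e.g. MyFunctionRole). Assuming MyFunction, MyDynamoTable and IsDevAccount as defined in the question:

ConditionalDynamoWritePolicy:
  Type: AWS::IAM::Policy
  Condition: IsDevAccount            # only created when the table itself is created
  Properties:
    PolicyName: conditional-dynamo-write
    Roles:
      - !Ref MyFunctionRole          # implicit execution role SAM creates for MyFunction
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - dynamodb:PutItem
            - dynamodb:UpdateItem
            - dynamodb:BatchWriteItem
          Resource: !GetAtt MyDynamoTable.Arn

This keeps the function itself unconditional while avoiding a blanket write permission on all tables.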
I'm trying to trigger a Lambda:alias (the alias is key here) on a schedule. The following code errors out with:
"SampleLambdaLiveAlias is not valid. Reason: Provided Arn is not in correct format. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: ValidationException;"
How do I properly target the lambda:alias in CloudFormation? I've tried !Ref, !Sub and just the logical name.
My custom-resource approach to retrieving the latest Lambda version appears to be a necessary evil of setting up the "live" alias, because AWS keeps old Lambda versions even after you delete the Lambda and the stack, and a valid version is required for a new alias. If anyone knows a more elegant approach to that problem, please see: how-to-use-sam-deploy-to-get-a-lambda-with-autopublishalias-and-additional-alises
SampleLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: SampleLambda
    AutoPublishAlias: staging
    CodeUri: src/
    Handler: SampleLambda.handler
    MemorySize: 512
    Runtime: nodejs12.x
    Role: !GetAtt SampleLambdaRole.Arn

SampleLambdaLiveAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref SampleLambdaFunction
    FunctionVersion: !GetAtt SampleLambdaGetMaxVersionFunction.version
    Name: live

SampleLambdaFunctionScheduledEvent:
  Type: AWS::Events::Rule
  Properties:
    State: ENABLED
    ScheduleExpression: rate(1 minute) # same as cron(0/1 * * * ? *)
    Description: Run SampleLambdaFunction once every minute.
    Targets:
      - Id: EventSampleLambda
        Arn: SampleLambdaLiveAlias
Your error is in the last line of the configuration you shared. In order to get the resource ARN you need to use the Ref intrinsic function, such as !Ref SampleLambdaLiveAlias:
SampleLambdaFunctionScheduledEvent:
  Type: AWS::Events::Rule
  Properties:
    State: ENABLED
    ScheduleExpression: rate(1 minute) # same as cron(0/1 * * * ? *)
    Description: Run SampleLambdaFunction once every minute.
    Targets:
      - Id: EventSampleLambda
        Arn: !Ref SampleLambdaLiveAlias
Be aware that the Ref intrinsic function may return different things for different resource types. For a Lambda alias it returns the ARN, which is just what you need.
You can check the official documentation for more details.
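One related detail worth double-checking (it isn't shown in the snippets above): when the schedule is a plain AWS::Events::Rule rather than a SAM Schedule event, EventBridge also needs explicit permission to invoke the alias. A minimal sketch, with an illustrative resource name:

SampleLambdaLiveAliasPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref SampleLambdaLiveAlias   # alias ARN, so the grant is scoped to the "live" alias
    Principal: events.amazonaws.com
    SourceArn: !GetAtt SampleLambdaFunctionScheduledEvent.Arn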
Hello, here is my project structure:
-AppName
  -Common
    -common.js // Global module which I'm using in all functions
  -Func1
    -index.js
  -Func2
    -index.js
  -template.yaml
And here is the template.yaml content:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  Func1:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: Func1/index.handler
      Runtime: nodejs6.10
      MemorySize: 512
      Timeout: 10
  Func2:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: Func2/index.handler
      Runtime: nodejs6.10
      MemorySize: 512
      Timeout: 10
When I deploy, for example, Func2, the resulting package contains all the folders in the application instead of only Func2. Is it possible to configure, through the YAML file, which files will be included in the resulting package?
For example, if I deploy Func2, I want to see the following in the package:
-Common
  -common.js
-Func2
  -index.js
You can use the CodeUri key in SAM and point it to the folder where your code lies for that one function.
So for this function, you'd want to do:
Func2:
  Type: 'AWS::Serverless::Function'
  Properties:
    CodeUri: Func2
    Handler: index.handler
    Runtime: nodejs6.10
    MemorySize: 512
    Timeout: 10
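Note that with CodeUri: Func2 the shared Common folder is no longer part of the package. If every function still needs it, one option is a Lambda layer; the following is only a rough sketch (CommonLayer is an illustrative name, and layer contents are extracted under /opt at runtime, so the require path changes accordingly):

CommonLayer:
  Type: 'AWS::Serverless::LayerVersion'
  Properties:
    ContentUri: Common            # packages Common/common.js into the layer
    CompatibleRuntimes:
      - nodejs6.10

Func2:
  Type: 'AWS::Serverless::Function'
  Properties:
    CodeUri: Func2
    Handler: index.handler
    Runtime: nodejs6.10
    MemorySize: 512
    Timeout: 10
    Layers:
      - !Ref CommonLayer          # common.js then lives at /opt/common.js, e.g. require('/opt/common')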