How to create the SAM template block for a Route53 alias record for a custom API Gateway domain - aws-lambda

Creating a SAM template to create an API + Lambda. Simples!
Resources:
  HelloWorldApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location: ./api.yaml
Throw into this a custom domain for the gateway and map it to the stage of the API:
Resources:
  HelloWorldApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location: ./api.yaml
      Domain:
        DomainName:
          Fn::Sub: api-${HelloWorldApi.Stage}.custom-domain.com
        CertificateArn: arn:aws:certificate...
If I were to do this via the console, after creating the custom domain and mapping the stage, I would still have to configure the DNS alias record in Route53 for the API.
My question is: how do I create the SAM template block for a Route53 alias record for a custom API Gateway domain?

Thanks to @lamanus for inspiring me to read the docs and see the wood for the trees.
The crux of the original question was how to reference the mapped custom domain created by AWS::Serverless::Api. Getting that reference is not obvious. That said, you don't need it if you create the Route53 record inside the AWS::Serverless::Api block, like so:
HelloWorldApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: prod
    Domain:
      DomainName:
        Fn::Sub: api-${HelloWorldApi.Stage}.custom-domain.com
      CertificateArn: arn:cert...
      Route53:
        HostedZoneName: custom-domain.com.
        EvaluateTargetHealth: true
    DefinitionBody:
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location: ./api.yaml
This one SAM resource creates the custom domain, the stage mapping, and the Route53 target alias.

You can use a CloudFormation template to create the Route 53 record.
To get the endpoint, you can use the Ref function. The docs state:
When the logical ID of this resource is provided to the Ref intrinsic function, it returns the ID of the underlying API Gateway API.
So it is possible to rebuild the API Gateway endpoint from the region value, by joining the Ref of the API with the fixed strings and the region, such as:
!Join
  - ''
  - - !Ref HelloWorldApi
    - '.execute-api.'
    - !Ref 'AWS::Region'  # or a specific value
    - '.amazonaws.com'
and then create a CNAME record in the Route 53 hosted zone. See the AWS docs.
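Put together, the record resource might look like the following sketch, following this answer's CNAME suggestion. The hosted zone name, record name, and TTL are assumptions for illustration:

```yaml
  ApiDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: custom-domain.com.   # assumed zone; note the trailing dot
      Name: api.custom-domain.com.         # assumed record name
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - !Join
          - ''
          - - !Ref HelloWorldApi
            - '.execute-api.'
            - !Ref 'AWS::Region'
            - '.amazonaws.com'
```

The accepted approach above (SAM's Domain/Route53 block) creates an alias record for you, so this hand-rolled record is only needed if you manage the DNS entry yourself.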

Related

AWS MediaConnect with CloudFormation, protocol error. What is the correct value for the 'srt-listener' protocol?

I have this CloudFormation template:
Resources:
  MediaConnectFlowSource:
    Type: 'AWS::MediaConnect::FlowSource'
    Properties:
      Description: SRTSource
      Name: SRTSource
      WhitelistCidr: 0.0.0.0/0
      Protocol: srt-listener
  MediaConnectFlow:
    Type: 'AWS::MediaConnect::Flow'
    Properties:
      Name: testStream
      Source: !Ref MediaConnectFlowSource
  MediaConnectFlowOutput:
    Type: 'AWS::MediaConnect::FlowOutput'
    Properties:
      CidrAllowList: 0.0.0.0/0
      FlowArn: !Ref MediaConnectFlow
      Name: SRTOutput
      Protocol: srt-listener
I'm trying to create these resources, and according to the AWS documentation for MediaConnect with CloudFormation this should work. Instead I'm getting this error:
Properties validation failed for resource MediaConnectFlowSource with message: #/Protocol: #: only 1 subschema matches out of 2 #/Protocol: failed validation constraint for keyword [enum]
As for the documentation itself, it gives no actual list of allowed enum values for the CloudFormation MediaConnect Flow Source protocol. It only shows the values which support failover, such as zixi-push, rtp-fec, rtp, and rist.
I've tried changing the protocol name and realized that even writing random characters for the protocol results in the same error. So is srt-listener not an actual protocol value? But checking the SDK documentation and the MediaConnect console, there is an srt-listener enum value for the protocol.
Since I want to use the srt-listener protocol, what would the actual value for it be?
I've tried SRT-listener, srt listener, and SRT listener, but I get the same error.
You can check the valid values from the AWS CLI or a CloudShell prompt by passing the help parameter to the create-flow command.
As of now, valid flow source protocols include:
"Protocol": "zixi-push"|"rtp-fec"|"rtp"|"zixi-pull"|"rist"|"st2110-jpegxs"|"cdi"|"srt-listener"|"srt-caller"|"fujitsu-qos"
I suggest tweaking the create-flow JSON until it works as a CLI command, then shifting that into a CloudFormation stack template. This will help distinguish between a flow parameter error and a CloudFormation syntax issue.
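As a quick pre-deploy guard against this class of validation error, you could check the protocol string against the enum above before generating the template. A minimal sketch; the value list is copied from the CLI help quoted above and may drift as AWS adds protocols:

```python
# Valid MediaConnect protocol values, as listed by `aws mediaconnect create-flow help`.
# NOTE: treat the CLI help as authoritative; this set may go stale.
VALID_PROTOCOLS = {
    "zixi-push", "rtp-fec", "rtp", "zixi-pull", "rist",
    "st2110-jpegxs", "cdi", "srt-listener", "srt-caller", "fujitsu-qos",
}

def check_protocol(value: str) -> str:
    """Return the value unchanged if valid, otherwise raise with a hint."""
    if value not in VALID_PROTOCOLS:
        raise ValueError(
            f"Invalid MediaConnect protocol {value!r}; "
            f"expected one of {sorted(VALID_PROTOCOLS)}"
        )
    return value

print(check_protocol("srt-listener"))  # srt-listener
```

Note that the values are case-sensitive, which is why SRT-listener and the other variants tried above all fail.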

How to rename AWS Lambda Function Name without changing its Function URL with SAM?

I am working with the following AWS SAM Template.
Resources:
  PaymentFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: payment_function
      CodeUri: PaymentFunction/
      Description: 'A lambda function to do a payment'
      ...
      ...
      ...
      FunctionUrlConfig:
        AuthType: NONE
        Cors:
          AllowOrigins:
            - "*"
          AllowHeaders:
            - "*"
          AllowMethods:
            - "*"
Outputs:
  PaymentFunctionUrl:
    Value:
      Fn::GetAtt: PaymentFunctionUrl.FunctionUrl
When I deploy this function with the deploy command, I get the following function URL:
https://{random_string}.lambda-url.{aws_region}.on.aws/
Whenever I change the LogicalResourceId (PaymentFunction) or the actual function name (payment_function), it creates a new {random_string}, which means a new function URL.
Is it possible to change the function name without changing the function URL?
I don't think it is possible. The CFN docs for the FunctionName property of an AWS Lambda function clearly state that updating the name requires replacement of the resource. This means that the old resource will be deleted and a new resource will be created, with a new function URL.
Changing the LogicalResourceId of any CFN resource will likewise create a new resource in your AWS account and delete the old one. For Lambda functions, this always results in a different function URL.
If you want to invoke Lambdas using URLs that you have more control over (and to be able to easily change which Lambda function gets executed for each path), have a look at the REST APIs of the Amazon API Gateway service.
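A sketch of that API Gateway approach in SAM, with the path and runtime as illustrative assumptions. The invoke URL is then owned by the API (and optionally a custom domain), so renaming or replacing the function does not change the URL clients call:

```yaml
Resources:
  PaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: PaymentFunction/
      Handler: index.handler
      Runtime: nodejs18.x        # assumed runtime
      Events:
        PaymentApi:
          Type: Api              # SAM creates an API Gateway REST API route
          Properties:
            Path: /payment       # assumed path
            Method: post
```

The function itself is still replaced when renamed, but the stable endpoint is the API Gateway stage (or custom domain) rather than the per-function URL.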

How to add environment variables in template.yaml in a secured way?

When creating a Lambda function through the SAM CLI using template.yaml, I have to pass a few environment variables, and they shouldn't be exposed on GitHub. Is there any way I can refer to the environment variables in template.yaml through a .env file?
I didn't find any sources on this.
Sample code snippet from template.yaml:
Properties:
  CodeUri: student/
  FunctionName: list
  Handler: index.listHandler
  Runtime: nodejs14.x
  Environment:
    Variables:
      MONGODB_URI: mongodb://username:pwd
There are a few options here.
1. Add them to the Parameters section of the template (be sure to add the NoEcho option) and pass them in at deploy time.
2. A slightly better option is to use Secrets Manager to store the value and then use dynamic references in the template. CloudFormation will retrieve the values from Secrets Manager for you at deploy time.
3. A better option is not to pass them as environment variables at all (since anyone with permission to view the function will be able to see the value). Instead, use Secrets Manager to store the value and look the value up in code. If you take this approach, be sure to cache the value so that you can at least reuse it between warm starts of the Lambda.
4. One more option is to encrypt the value using KMS and pass the encrypted (Base64-encoded) value to the function. You'll need to call KMS Decrypt to get the decrypted value. This operation is pretty fast and isn't likely to be throttled. I would still cache the value to help speed things up between warm starts.
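A sketch of the caching pattern from option 3, in Python for brevity. The secret name is hypothetical, and the client is injectable only so the cache logic can be exercised without AWS credentials:

```python
_cache = {}  # module-level: survives between warm starts of the Lambda

def get_secret(secret_id, client=None):
    """Fetch a secret from Secrets Manager, caching it for the lifetime
    of the Lambda execution environment (i.e. across warm invocations)."""
    if secret_id in _cache:
        return _cache[secret_id]
    if client is None:
        import boto3  # deferred import so the module loads without AWS creds
        client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    _cache[secret_id] = resp["SecretString"]
    return _cache[secret_id]

# In the handler (secret name is hypothetical):
# MONGODB_URI = get_secret("prod/mongodb_uri")
```

Only the first invocation in a warm container pays the Secrets Manager round trip; subsequent calls hit the module-level dict.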
By extension of @Jason's answer (option 2), here is a full working example:
template.yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: My test secrets manager dynamic reference SAM template/CloudFormation stack
Resources:
  # lambdas
  myLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub ${AWS::StackName}-myLambda
      Runtime: nodejs12.x
      Handler: index.handler
      CodeUri: ./src/handlers/myLambda
      MemorySize: 128
      Timeout: 10
      Environment:
        Variables:
          someSecret: '{{resolve:secretsmanager:somePreviouslyStoredSecret}}'
src/handlers/myLambda/index.js
const { someSecret } = process.env;

exports.handler = (event, context, callback) => {
  if (someSecret) return callback(null, `secret: ${someSecret}`);
  callback(`Unexpected error, secret: ${someSecret}`);
};

Serverless Deploy fails: At least one of ProvisionedThroughput, ... is required

I am trying to deploy new Lambda Functions and API Gateways to AWS using the npm serverless package. The new functions are being deployed on top of previously existing functions, and new DynamoDB tables are being created along with the new lambda functions.
The deploy is failing with the following error:
An error occurred: authDB - At least one of ProvisionedThroughput, BillingMode, UpdateStreamEnabled, GlobalSecondaryIndexUpdates or SSESpecification or ReplicaUpdates is required (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
The 'authDB' is a table that already exists in DynamoDB. The serverless.yml file for this database table is as follows:
authDB:
  Type: "AWS::DynamoDB::Table"
  DeletionPolicy: Retain
  Properties:
    AttributeDefinitions:
      - AttributeName: key
        AttributeType: S
    KeySchema:
      - AttributeName: key
        KeyType: HASH
    ProvisionedThroughput:
      ReadCapacityUnits: 5
      WriteCapacityUnits: 5
    TableName: "auth-db"
I'm not sure why I am receiving this error, since ProvisionedThroughput is defined.
[UPDATE] This authDB config has not been changed since it was originally deployed. The only change to the serverless.yml, aside from the new function/database resources, is the addition of serverless-plugin-split-stacks to bypass the CloudFormation 200-resource limit. This is the configuration of serverless-plugin-split-stacks:
custom:
  splitStacks:
    perFunction: true
    perType: false
    perGroupFunction: false
The documentation for serverless-plugin-split-stacks states:
"Many kind of resources (as e.g. DynamoDB tables) cannot be freely moved between CloudFormation stacks (that can only be achieved via full removal and recreation of the stage)"
I am not 100% sure this is the error being thrown, with a bad message, but to test it out I would try applying your CloudFormation templates to an empty, new AWS account and see if they succeed.

Lambda backed custom resource cf template returns 'CREATE_FAILED'

The lambda function below associates an SNS topic with the existing directories, followed by a custom resource to invoke the lambda function itself. I see that the lambda creation is successful, with the Register_event_topic step also completing. However, the stack fails after a while, mostly because the 'custom resource failed to stabilize in expected time'. How can I ensure that the stack does not error out?
AWSTemplateFormatVersion: '2010-09-09'
# creating lambda function to register_event_topic
Description: Lambda function to register event topic with existing directory ID
Parameters:
  RoleName:
    Type: String
    Description: "IAM Role used for Lambda execution"
    Default: "arn:aws:iam::<<Accountnumber>>:role/LambdaExecutionRole"
  EnvVariable:
    Type: String
    Description: "The Environment variable set for the lambda func"
    Default: "ESdirsvcSNS"
Resources:
  REGISTEREVENTTOPIC:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: dirsvc_snstopic_lambda
      Handler: index.lambda_handler
      Runtime: python3.6
      Description: Lambda func code to assoc dirID with created SNS topic
      Code:
        ZipFile: |
          import boto3
          import os
          import logging

          dsclient = boto3.client('ds')

          def lambda_handler(event, context):
              response = dsclient.describe_directories()
              print(response)
              for directoryList in response['DirectoryDescriptions']:
                  listTopics = dsclient.describe_event_topics(
                      DirectoryId=directoryList['DirectoryId']
                  )
                  eventTopics = listTopics['EventTopics']
                  if len(eventTopics) == 0:
                      response = dsclient.register_event_topic(
                          DirectoryId=directoryList['DirectoryId'],
                          TopicName=os.environ['MONITORING_TOPIC_NAME']
                      )
                  print(listTopics)
      Timeout: 60
      Environment:
        Variables:
          MONITORING_TOPIC_NAME: !Ref EnvVariable
      Role: !Ref RoleName
  InvokeLambda:
    Type: Custom::InvokeLambda
    Properties:
      ServiceToken: !GetAtt REGISTEREVENTTOPIC.Arn
      ReservedConcurrentExecutions: 1
Alas, writing a Custom Resource is not as simple as you'd initially think. Instead, special code must be added to post the response back to a URL.
You can see this in the sample Zip file provided on: Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation
From the Custom Resources - AWS CloudFormation documentation:
The custom resource provider processes the AWS CloudFormation request and returns a response of SUCCESS or FAILED to the pre-signed URL. The custom resource provider provides the response in a JSON-formatted file and uploads it to the pre-signed S3 URL.
This is due to the asynchronous behaviour of CloudFormation. It doesn't simply call the Lambda function and then wait for a response. Rather, it triggers the Lambda function and the function must call back and trigger the next step in CloudFormation.
Your lambda doesn't support the custom resource life cycle.
In a Lambda-backed custom resource, you implement your logic to support creation, update and deletion of the resource. These indications are sent from CloudFormation via the event and give you information about the stack process.
In addition, you should also return your status back to CloudFormation. CloudFormation expects to get a response from your Lambda function after you're done with your logic. It will not continue with the deployment process if it doesn't get a response, or at least not until a 1 hour(!) timeout is reached. That can cost you a lot of time and frustration.
You can read more here.
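To make the mechanics concrete, here is a minimal sketch of the signaling step, hand-rolled with urllib rather than the AWS-provided cfnresponse helper; the field names follow CloudFormation's custom resource response spec, and the registration logic itself is elided:

```python
import json
import urllib.request

def build_response(event, context, status, data=None, physical_id=None):
    """Assemble the JSON body CloudFormation expects at the pre-signed URL."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": f"See CloudWatch log stream: {context.log_stream_name}",
        "PhysicalResourceId": physical_id or context.log_stream_name,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def send_response(event, context, status, data=None):
    """PUT the response body to the pre-signed S3 URL in the event."""
    body = json.dumps(build_response(event, context, status, data)).encode()
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)

def lambda_handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            pass  # ... register the event topic here, as in the question ...
        # "Delete" must also reach send_response, or stack deletion hangs
        send_response(event, context, "SUCCESS")
    except Exception:
        send_response(event, context, "FAILED")
```

The key point is that every code path, including Delete and every failure, must reach send_response; otherwise the stack sits in CREATE_IN_PROGRESS (or DELETE_IN_PROGRESS) until the timeout.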
