AWS MediaConnect with CloudFormation, protocol error. What is the correct value for the 'srt-listener' protocol? - yaml

I have this CloudFormation template:
Resources:
  MediaConnectFlowSource:
    Type: 'AWS::MediaConnect::FlowSource'
    Properties:
      Description: SRTSource
      Name: SRTSource
      WhitelistCidr: 0.0.0.0/0
      Protocol: srt-listener
  MediaConnectFlow:
    Type: 'AWS::MediaConnect::Flow'
    Properties:
      Name: testStream
      Source: !Ref MediaConnectFlowSource
  MediaConnectFlowOutput:
    Type: 'AWS::MediaConnect::FlowOutput'
    Properties:
      CidrAllowList: 0.0.0.0/0
      FlowArn: !Ref MediaConnectFlow
      Name: SRTOutput
      Protocol: srt-listener
I'm trying to create these resources, and according to the AWS documentation for MediaConnect with CloudFormation this should work. Instead I'm getting this error:
Properties validation failed for resource MediaConnectFlowSource with message: #/Protocol: #: only 1 subschema matches out of 2 #/Protocol: failed validation constraint for keyword [enum]
In the documentation itself, regarding the enum allowed in the CloudFormation template for the MediaConnect flow source, no actual allowed values are listed. It only shows the values which support failover, like Zixi-push, RTP-FEC, RTP, and RIST.
I've tried changing the protocol name and realized that even writing random characters for the protocol results in the same error. So is srt-listener not an actual protocol value? But checking the SDK documentation and the MediaConnect console, there is an srt-listener enum value for the protocol.
Since I want to use the srt-listener protocol, what would the actual value for it be?
I've tried SRT-listener, srt listener, and SRT listener, but I get the same error.

You can check the valid values from the AWS CLI or a CloudShell prompt by passing the 'help' parameter to the create-flow command.
As of now, valid flow source types include:
"Protocol": "zixi-push"|"rtp-fec"|"rtp"|"zixi-pull"|"rist"|"st2110-jpegxs"|"cdi"|"srt-listener"|"srt-caller"|"fujitsu-qos"
I suggest tweaking the create-flow JSON until it works using a CLI command, then shifting that into a CloudFormation stack template. This will help distinguish between a flow parameter error and a CloudFormation syntax issue.
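For reference, a sketch of declaring the SRT source inline on the flow instead of as a standalone FlowSource (the port numbers are placeholders, and the property names should be verified against the current AWS::MediaConnect::Flow resource reference):

```yaml
MediaConnectFlow:
  Type: AWS::MediaConnect::Flow
  Properties:
    Name: testStream
    Source:                    # the flow's primary source is defined inline
      Name: SRTSource
      Description: SRTSource
      Protocol: srt-listener
      IngestPort: 5000         # placeholder port
      WhitelistCidr: 0.0.0.0/0
MediaConnectFlowOutput:
  Type: AWS::MediaConnect::FlowOutput
  Properties:
    FlowArn: !Ref MediaConnectFlow   # Ref on a flow returns its ARN
    Name: SRTOutput
    Protocol: srt-listener
    Port: 6000                       # placeholder port
    CidrAllowList:
      - 0.0.0.0/0
```

A standalone AWS::MediaConnect::FlowSource, by contrast, is meant for attaching additional sources to an existing flow via its FlowArn property.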


How to optionally apply environment configuration?

I want to optionally apply a VPC configuration based on whether an environment variable is set.
Something like this:
custom:
  vpc:
    securityGroupIds:
      - ...
    subnetIds:
      - ...
functions:
  main:
    ...
    vpc: !If
      - ${env:USE_VPC}
      - ${self:custom.vpc}
      - ~
I'd also like to do similar for alerts (optionally add emails to receive alerts) and other fields too.
How can this be done?
I've tried the above configuration and a variety of others, but I just receive various errors.
For example:
Configuration error:
at 'functions.main.vpc': must have required property 'securityGroupIds'
at 'functions.main.vpc': must have required property 'subnetIds'
at 'functions.main.vpc': unrecognized property 'Fn::If'
Currently, the best way to achieve such behavior is to use a JS/TS-based configuration instead of YAML. With TS/JS you get the full power of a programming language to shape your configuration however you want, including conditional checks that exclude certain parts of the configuration. It's not documented too well, but you can use this as a starting point: https://github.com/serverless/examples/tree/v3/legacy/aws-nodejs-typescript
In general, you can do whatever you want, as long as you export a valid object (or a promise that resolves to a valid object) with serverless configuration.
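A minimal sketch of what such a serverless.ts could look like; the env var name USE_VPC and the placeholder IDs are assumptions carried over from the question, not part of any documented API:

```typescript
// serverless.ts — JS/TS-based serverless configuration sketch.
const vpc = {
  securityGroupIds: ['sg-00000000'], // placeholder
  subnetIds: ['subnet-00000000'],    // placeholder
};

// Plain JS condition instead of CloudFormation's Fn::If
const useVpc = process.env.USE_VPC === 'true';

const serverlessConfiguration = {
  service: 'my-service',
  provider: { name: 'aws', runtime: 'nodejs18.x' },
  functions: {
    main: {
      handler: 'handler.main',
      // Spread the vpc block in only when USE_VPC is set;
      // otherwise the key is absent entirely, which avoids the
      // schema errors from passing Fn::If in YAML.
      ...(useVpc ? { vpc } : {}),
    },
  },
};

module.exports = serverlessConfiguration;
```

The same spread trick works for optionally adding alert emails or any other block: build the object conditionally, and omit the key entirely when the condition is false.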

How to create the SAM template block for a Route53 alias record for a custom GatewayAPI domain

Creating a SAM template for the creation of an API + Lambda. Simple!
Resources:
  HelloWorldApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location: ./api.yaml
Throw into this a custom domain for the gateway and map it to the stage of the API.
Resources:
  HelloWorldApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location: ./api.yaml
      Domain:
        DomainName:
          Fn::Sub: api-${HelloWorldApi.Stage}.custom-domain.com
        CertificateArn: arn:aws:certificate...
If I were to do this via the console, after creating the custom domain and mapping the stage, I would have to configure the DNS alias record in Route53 for the API and mapping.
My question is how to create the SAM template block for a Route53 alias record for a custom GatewayAPI domain
Thanks to @lamanus for inspiring me to read the docs and see the wood for the trees.
The crux of the original post was the reference to the mapped custom domain created by AWS::Serverless::Api. Getting that reference is not obvious. That said, you don't need it if you create the Route53 record in the AWS::Serverless::Api block, like so:
HelloWorldApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: prod
    Domain:
      DomainName:
        Fn::Sub: api-${HelloWorldApi.Stage}.custom-domain.com
      CertificateArn: arn:cert...
      Route53:
        HostedZoneName: custom-domain.com.
        EvaluateTargetHealth: true
    DefinitionBody:
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location: ./api.yaml
This SAM resource will create a custom domain, and mapping, and the Route53 target alias.
You can use the CloudFormation template to create the Route 53 Record.
To get the endpoint, you can use the Ref function.
When the logical ID of this resource is provided to the Ref intrinsic function, it returns the ID of the underlying API Gateway API.
So it is possible to rebuild the API Gateway endpoint with the region value, by joining the Ref of the API with the surrounding strings, such as:
!Join
  - ''
  - - !Ref HelloWorldApi
    - .execute-api.
    - !Ref AWS::Region  # or a specific value
    - .amazonaws.com
and then create a CNAME record in the Route 53 hosted zone. See the AWS docs.
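Putting that together, a record sketch (the zone and record names are placeholders taken from the question):

```yaml
ApiCnameRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: custom-domain.com.
    Name: api.custom-domain.com.
    Type: CNAME
    TTL: '300'
    ResourceRecords:
      # Rebuilt execute-api endpoint from the API's logical ID
      - !Join
        - ''
        - - !Ref HelloWorldApi
          - .execute-api.
          - !Ref AWS::Region
          - .amazonaws.com
```

Note this points at the default execute-api endpoint rather than the custom-domain mapping; the Route53 block inside AWS::Serverless::Api (shown in the other answer) handles the alias for the custom domain itself.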

Serverless Deploy fails: At least one of ProvisionedThroughput, ... is required

I am trying to deploy new Lambda Functions and API Gateways to AWS using the npm serverless package. The new functions are being deployed on top of previously existing functions, and new DynamoDB tables are being created along with the new lambda functions.
The deploy is failing with the following error:
An error occurred: authDB - At least one of ProvisionedThroughput, BillingMode, UpdateStreamEnabled, GlobalSecondaryIndexUpdates or SSESpecification or ReplicaUpdates is required (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
The 'authDB' is a table that already exists in DynamoDB. The serverless.yml file for this database table is as follows:
authDB:
  Type: "AWS::DynamoDB::Table"
  DeletionPolicy: Retain
  Properties:
    AttributeDefinitions:
      - AttributeName: key
        AttributeType: S
    KeySchema:
      - AttributeName: key
        KeyType: HASH
    ProvisionedThroughput:
      ReadCapacityUnits: 5
      WriteCapacityUnits: 5
    TableName: "auth-db"
I'm not sure as to why I am receiving this error, since the 'ProvisionedThroughput' is defined.
[UPDATE] This authDB config has not been changed since it was originally deployed... the only change to the serverless.yml, aside from the new functions/database resources, is the addition of serverless-plugin-split-stacks to bypass the CloudFormation 200-resource limit. This is the configuration of serverless-plugin-split-stacks:
custom:
  splitStacks:
    perFunction: true
    perType: false
    perGroupFunction: false
In the documentation for serverless-plugin-split-stacks it states:
"Many kind of resources (as e.g. DynamoDB tables) cannot be freely moved between CloudFormation stacks (that can only be achieved via full removal and recreation of the stage)"
I am not 100% sure this is the actual cause of the error being thrown (the message may be misleading), but to test it out I would try applying your CloudFormation templates to an empty, new AWS account and see if they succeed.

CloudFormation yaml - How to force number type?

I'm trying to create an ECS task definition as part of a CloudFormation stack.
My task definition so far looks like this...
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - EC2
    ExecutionRoleArn: !Ref MyTaskRole
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: amazon/amazon-ecs-sample
        PortMappings:
          - ContainerPort: 3000
            HostPort: 0
            Protocol: tcp
        MemoryReservation: 128
When I try to run this, I get the following error...
#/ContainerDefinitions/0/MemoryReservation: expected type: Number, found: String
So it seems that CloudFormation is converting 128 to a string, and then the stack fails.
What is the correct way to define this value so that it remains a number?
It turned out that the error being reported by CloudFormation actually had nothing to do with the failure. The code above was perfectly fine.
In my case the problem was with the way I'd defined the logging section which appeared later in the template.
The takeaway from this, is that CloudFormation is very confusing to debug, and if you receive an error like this, don't assume it is what's actually causing the stack to fail.
To find the actual problem, I had to first remove the properties which were causing the type conversion error, MemoryReservation and PortMappings, and then it showed an error about the way I'd defined my logging section. After fixing that fault, I was able to re-add the other properties, and it worked fine.
I suspect now that because my logging section was incorrect, the whole ContainerDefinitions perhaps wasn't being parsed correctly, potentially causing the misleading type mismatch error.
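The post doesn't show the broken logging section, but for comparison, a well-formed awslogs configuration inside a container definition looks roughly like this (the log group name is a placeholder):

```yaml
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: /ecs/my-service   # placeholder group name
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: ecs
```

A malformed block here (e.g. options nested at the wrong level) can surface as an unrelated type error elsewhere in ContainerDefinitions, as described above.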

Lambda backed custom resource cf template returns 'CREATE_FAILED'

The lambda function below associates an SNS topic with the existing directories, followed by a custom resource to invoke the lambda function itself. I see that the lambda creation is successful, with the 'Register_event_topic' call also completing. However, the stack fails after a while, mostly because the 'custom resource failed to stabilize in expected time'. How can I ensure that the stack does not error out?
AWSTemplateFormatVersion: '2010-09-09'
# creating lambda function to register_event_topic
Description: Lambda function to register event topic with existing directory ID
Parameters:
  RoleName:
    Type: String
    Description: "IAM Role used for Lambda execution"
    Default: "arn:aws:iam::<<Accountnumber>>:role/LambdaExecutionRole"
  EnvVariable:
    Type: String
    Description: "The Environment variable set for the lambda func"
    Default: "ESdirsvcSNS"
Resources:
  REGISTEREVENTTOPIC:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: dirsvc_snstopic_lambda
      Handler: index.lambda_handler
      Runtime: python3.6
      Description: Lambda func code to assoc dirID with created SNS topic
      Code:
        ZipFile: |
          import boto3
          import os
          import logging

          dsclient = boto3.client('ds')

          def lambda_handler(event, context):
              response = dsclient.describe_directories()
              print(response)
              for directoryList in response['DirectoryDescriptions']:
                  listTopics = dsclient.describe_event_topics(
                      DirectoryId=directoryList['DirectoryId']
                  )
                  eventTopics = listTopics['EventTopics']
                  topiclength = len(eventTopics)
                  if topiclength == 0:
                      response = dsclient.register_event_topic(
                          DirectoryId=directoryList['DirectoryId'],
                          TopicName=(os.environ['MONITORING_TOPIC_NAME'])
                      )
                  print(listTopics)
      Timeout: 60
      Environment:
        Variables:
          MONITORING_TOPIC_NAME: !Ref EnvVariable
      Role: !Ref RoleName
  InvokeLambda:
    Type: Custom::InvokeLambda
    Properties:
      ServiceToken: !GetAtt REGISTEREVENTTOPIC.Arn
      ReservedConcurrentExecutions: 1
Alas, writing a Custom Resource is not as simple as you'd initially think. Special code must be added to post the response back to a pre-signed URL.
You can see this in the sample Zip file provided on: Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation
From the Custom Resources - AWS CloudFormation documentation:
The custom resource provider processes the AWS CloudFormation request and returns a response of SUCCESS or FAILED to the pre-signed URL. The custom resource provider provides the response in a JSON-formatted file and uploads it to the pre-signed S3 URL.
This is due to the asynchronous behaviour of CloudFormation. It doesn't simply call the Lambda function and then wait for a response. Rather, it triggers the Lambda function and the function must call back and trigger the next step in CloudFormation.
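A minimal sketch of that response plumbing (the helper names here are mine, not a documented API; for inline ZipFile code, AWS also provides a cfnresponse module that does the same thing):

```python
import json
import urllib.request


def build_response(event, context, status, data=None):
    # Field names are fixed by the custom-resource response contract.
    return {
        'Status': status,  # 'SUCCESS' or 'FAILED'
        'Reason': 'See CloudWatch log stream: %s' % context.log_stream_name,
        'PhysicalResourceId': context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': data or {},
    }


def send_response(event, context, status, data=None):
    # CloudFormation waits for a PUT of this JSON body to the
    # pre-signed S3 URL it passed in event['ResponseURL'].
    body = json.dumps(build_response(event, context, status, data)).encode()
    req = urllib.request.Request(event['ResponseURL'], data=body,
                                 method='PUT', headers={'Content-Type': ''})
    urllib.request.urlopen(req)
```

The handler should call send_response for Create, Update and Delete request types alike; skipping the Delete case is a common way to hang stack deletion.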
Your lambda doesn't support the custom resource life cycle.
In a Lambda-backed custom resource, you implement your logic to support creation, update and deletion of the resource. These indications are sent from CloudFormation via the event and give you information about the stack process.
In addition, you should also return your status back to CloudFormation. CloudFormation expects to get a response from your Lambda function after you're done with your logic. It will not continue with the deployment process if it doesn't get a response, or at least until a 1 hour(!) timeout is reached. That can cost you a lot of time and frustration.
You can read more here
