Variable in Namespace for ElasticBeanstalk CFT - yaml

I have a CloudFormation template (YAML) that passes OptionSettings for an Elastic Beanstalk application. I can hard-code values, and I can pass values from Parameters. However, I cannot work out how to pass a Parameter or variable as part of the Namespace.
This works:

- Namespace: aws:elasticbeanstalk:environment:process:lbtargetgroup
  OptionName: Port
  Value: 3000
This works (where PORTNUMBER is a parameter):

Parameters:
  PORTNUMBER:
    Type: String
    Description: Port number

ElasticBeanstalkConfig:
  Properties:
    OptionSettings:
      - Namespace: aws:elasticbeanstalk:environment:process:lbtargetgroup
        OptionName: Port
        Value: !Ref PORTNUMBER
However, this does not work (where LBTARGETGROUP is a parameter):

Parameters:
  LBTARGETGROUP:
    Type: String
    Description: Target Group Name

ElasticBeanstalkConfig:
  Properties:
    OptionSettings:
      - Namespace: aws:elasticbeanstalk:environment:process:!Ref LBTARGETGROUP
        OptionName: Port
        Value: 3000
From what I have tried, CloudFormation does not support typical variables (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html#template-anatomy-sections). I have also tried Mappings, but I can't figure out how to pass the name as a parameter.

The answer is often very simple... the following works:

Parameters:
  LBTARGETGROUP:
    Type: String
    Description: Target Group Name

ElasticBeanstalkConfig:
  Properties:
    OptionSettings:
      - Namespace: !Join ["", ["aws:elasticbeanstalk:environment:process:", !Ref LBTARGETGROUP]]
        OptionName: Port
        Value: 3000
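An equivalent, arguably more readable form uses !Sub, which interpolates the parameter directly into the string (a sketch against the same LBTARGETGROUP parameter):

```yaml
Parameters:
  LBTARGETGROUP:
    Type: String
    Description: Target Group Name

ElasticBeanstalkConfig:
  Properties:
    OptionSettings:
      # !Sub replaces ${LBTARGETGROUP} with the parameter's value at deploy time
      - Namespace: !Sub "aws:elasticbeanstalk:environment:process:${LBTARGETGROUP}"
        OptionName: Port
        Value: 3000
```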

Related

Is it possible to set EventBridge ScheduleExpression value from SSM in Serverless

I want to schedule one lambda via AWS EventBridge. The issue is I want to read the number value used in ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:

heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: "1"
        Description: value in minutes; needs converting to seconds/milliseconds

Is this possible to achieve in serverless.yml?
The reason for reading it from SSM: it's a heartbeat service, and the same value will be used by the FE to send a heartbeat at a set interval. The BE lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the variable as a command-line argument, something like below:

custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
      schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
Even if we made the other approach work, we would still have to redeploy the application, just as we do in this case.
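For completeness: the limitation here is that the SSM parameter is created in the same stack. If the parameter already existed before the deploy, the Serverless Framework can resolve it at package time with its ${ssm:...} variable syntax. A sketch, assuming a pre-existing parameter at a hypothetical path (note the value is resolved at deploy time, so changing it in SSM still requires a redeploy):

```yaml
# Assumes /myenv/lambda/HeartbeatInterval already exists in SSM
# before this service is deployed.
custom:
  heartbeatInterval: ${ssm:/myenv/lambda/HeartbeatInterval}

functions:
  heartbeat-check:
    handler: groupconsultation/heartbeatcheck.handler
    events:
      - eventBridge:
          schedule: rate(${self:custom.heartbeatInterval} minutes)
```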

Pass a CloudFormation YAML list via a JSON string parameter

I am attempting to import an existing load balancer into a CloudFormation stack. The listeners must be specified as a YAML list, but there is no CloudFormation parameter type for list (array) or object, so the parameter for the YAML list must be a string. This is causing the following CloudFormation error
Value of property Listeners must be of type List
The value of the string parameter for the listeners is set using the CLI:
aws elb describe-load-balancers --load-balancer-names $ELB_DNS_NAME --query 'LoadBalancerDescriptions[0].ListenerDescriptions[].Listener' | jq --compact-output '.' | sed -e 's/"/\\"/g'
Notice that the resultant JSON from the above command is escaped. I suspect that this is the root cause of the issue.
[
  ...
  {
    "ParameterKey": "ElbListeners",
    "ParameterValue": "[{\"Protocol\":\"TCP\",\"LoadBalancerPort\":443,\"InstanceProtocol\":\"TCP\",\"InstancePort\":31672},{\"Protocol\":\"TCP\",\"LoadBalancerPort\":80,\"InstanceProtocol\":\"TCP\",\"InstancePort\":30545}]"
  },
  ...
]
CloudFormation doesn't seem to offer any way of un-escaping the string parameter, so the following template fails.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ...
  IngressLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    DeletionPolicy: Delete
    Properties:
      Listeners: !Ref ElbListeners
      LoadBalancerName: !Ref ElbName
Parameters:
  ...
  ElbListeners:
    Type: String
    Description: Listeners for the load balancer
    Default: ""
  ElbName:
    Type: String
    Description: Name of the load balancer
    Default: ""
Replacing quotes in the resultant JSON with ${quote} in the parameters file, and then replacing ${quote} with quotes using !Sub fails. It seems that the first input for !Sub can't be !Ref ParameterName.
I don't know how many listeners there will be, so it's not feasible to hardcode a list of listeners in the template and pass in multiple parameters for the ports/protocols.
How can I pass a YAML list as a JSON string parameter?
You can take the content of the ElbListeners parameter and simply insert it into the template, removing it from your Parameters. The resulting template would look like:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ...
  IngressLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    DeletionPolicy: Delete
    Properties:
      Listeners:
        - Protocol: TCP
          LoadBalancerPort: 443
          InstanceProtocol: TCP
          InstancePort: 31672
        - Protocol: TCP
          LoadBalancerPort: 80
          InstanceProtocol: TCP
          InstancePort: 30545
      LoadBalancerName: !Ref ElbName
Parameters:
  ...
  ElbName:
    Type: String
    Description: Name of the load balancer
    Default: ""

AWS SAM - Apply policy template for a resource created conditionally

I create a DynamoDB table conditionally:

MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

and this is how IsDevAccount is defined using an input parameter:

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]
Now I'm creating a Lambda function that accepts the table's name (amongst other things) as input through environment variables. This is done conditionally, too. Within the function's code, I'd check if the table name is passed (pass empty if condition isn't met). If so, I'd put some items into it.
However, I'm not sure how to apply policy templates to the function's role conditionally. Normally I do it like this:

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          TableName: !Ref MyDynamoTable
What happens to the function's execution role if the table isn't created because the condition isn't met (e.g.: in another account)? Can I apply this policy template conditionally, as well?
What I don't want to do is to blindly give write permission to all DynamoDB tables within the account.
Yes, you can add the condition to the DB write policy so that the write policy only applies when the condition is met.
Since you create the table only if the environment is staging or development, you can apply a condition on the policy that checks for your table name and then attaches the write policy. Example below:

MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Equals [ !Ref MyDynamoTable, "myTableName" ]
          TableName: !Ref MyDynamoTable
Update in response to comments:
!Ref returns the value of the specified parameter or resource. We need parameters with allowed values for the environment and the DB table for the condition.

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - stage
      - prod
  MyDynamoTable:
    Description: table name for the db
    Type: String
    AllowedValues:
      - tableOne
      - tableTwo
      - myTableName

Conditions:
  IsDevAccount: !Equals [ !Ref Environment, "dev" ]
  TableExists: !Equals [ !Ref MyDynamoTable, "myTableName" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !And [IsDevAccount, TableExists] # With only the TableExists condition it'll also work fine, given the added parameters
          TableName: !Ref MyDynamoTable
Ref:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Update 2:
Agreed. I researched and confirmed there is no way to check for resources created in the same stack template (that's why I suggested a parameter). Conditions are all parameter-based.
However, if the resource was already created in another stack, you could bring it in through resource import. I don't think resource import will help with your requirement, though.
A workaround would be to have a Boolean parameter for the TableExists condition and pass the value through the AWS CLI at deploy time, like below:
MyDynamoTable:
  Description: dynamo db table
  Type: String
  AllowedValues:
    - true
    - false

Conditions:
  TableExists: !Equals [ !Ref MyDynamoTable, "true" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Ref TableExists
          TableName: !Ref MyDynamoTable
Then pass the required parameters via the AWS CLI on deploy:

aws cloudformation deploy --template-file templateName.yml --parameter-overrides MyDynamoTable="true" dynamoDBtableName="myTableName" (plus any other required parameters)
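A common alternative for attaching a policy entry only when a condition holds is Fn::If with the AWS::NoValue pseudo parameter, which removes the list entry entirely when the condition is false. A sketch using the question's IsDevAccount condition (not verified against every SAM version; intrinsic-function support inside Policies can vary):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      # When IsDevAccount is false, AWS::NoValue drops this entry,
      # so no DynamoDB write permission is granted at all.
      - !If
          - IsDevAccount
          - DynamoDBWritePolicy:
              TableName: !Ref MyDynamoTable
          - !Ref AWS::NoValue
```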

How do I access the Cognito UserPoolClient Secret in Lambda function?

I have created a Cognito UserPool and UserPoolClient via Resources in my serverless.yml file like this:
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    AccountRecoverySetting:
      RecoveryMechanisms:
        - Name: verified_email
          Priority: 2
    UserPoolName: ${self:provider.stage}-user-pool
    UsernameAttributes:
      - email
    MfaConfiguration: "OFF" # quoted so YAML doesn't parse OFF as a boolean
    Policies:
      PasswordPolicy:
        MinimumLength: 8
        RequireLowercase: true
        RequireNumbers: true
        RequireSymbols: true
        RequireUppercase: true

CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    ClientName: ${self:provider.stage}-user-pool-client
    UserPoolId:
      Ref: CognitoUserPool
    ExplicitAuthFlows:
      - ALLOW_USER_PASSWORD_AUTH
      - ALLOW_REFRESH_TOKEN_AUTH
    GenerateSecret: true
Now I can pass the UserPool and UserPoolClient IDs as environment variables to the lambda functions like this:

my_function:
  package: {}
  handler:
  events:
    - http:
        path: <path>
        method: post
        cors: true
  environment:
    USER_POOL_ID: !Ref CognitoUserPool
    USER_POOL_CLIENT_ID: !Ref CognitoUserPoolClient

I can access these IDs in my code as:

USER_POOL_ID = os.environ['USER_POOL_ID']
USER_POOL_CLIENT_ID = os.environ['USER_POOL_CLIENT_ID']
I have printed the values, and they are printed correctly. However, the UserPoolClient also generates an app client secret, which I need when generating the secret hash. How can I access the app client secret (the UserPoolClient's secret) in my lambda?
Probably not what you hoped for, but you cannot export the client secret in CloudFormation explicitly. Take a look at the return values from AWS::Cognito::UserPoolClient; there you can only get the client ID.
What you could do is create the client in another CF template and either create a custom resource there to read the secret and output it, or add an intermediate step where you fetch the value with the CLI and then pass it into serverless.
There is currently no other option.
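Once the secret is available to the function (however it was delivered), the SECRET_HASH that Cognito expects is a Base64-encoded HMAC-SHA256 of username + client ID, keyed with the client secret. A stdlib-only sketch (all values shown are made-up placeholders):

```python
import base64
import hashlib
import hmac

def secret_hash(username: str, client_id: str, client_secret: str) -> str:
    """Compute the SECRET_HASH Cognito requires when the app client has a secret."""
    digest = hmac.new(
        client_secret.encode("utf-8"),
        (username + client_id).encode("utf-8"),  # message is username + client ID
        hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode()

# Placeholder values for illustration only
print(secret_hash("user@example.com", "exampleclientid", "examplesecret"))
```

At runtime the Lambda could also fetch the secret itself via the cognito-idp DescribeUserPoolClient API (the response includes ClientSecret), provided its execution role allows that call.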

How to use an existing security group from Horizon in a Heat template

I'm a newbie with Heat YAML templates loaded into OpenStack.
I've got this command, which works fine:
openstack server create --image RHEL-7.4 --flavor std.cpu1ram1 --nic net-id=network-name.admin-network --security-group security-name.group-sec-default value instance-name
I tried to translate the command above into this Heat file:
heat_template_version: 2014-10-16

description: Simple template to deploy a single compute instance with an attached volume

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      image: RHEL-7.4
      flavor: std.cpu1ram1
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: security-name.group-sec-default

  security-group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: security-name.group-sec-default

  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
The stack creation failed with the following error message:
openstack stack create -t my_first.yaml First_stack
openstack stack show First_stack
.../...
| stack_status_reason | Resource CREATE failed: BadRequest: resources.my_instance: Unable to find security_group with name or id 'sec_group1' (HTTP 400) (Request-ID: req-1c5d041c-2254-4e43-8785-c421319060d0)
.../...
Thanks for helping,
According to the template guide, the rules property expects a list.
So change the content of the template as below for security-group:

security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules: [security-name.group-sec-default]

OR

security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - security-name.group-sec-default
After digging, I finally found what was wrong in my Heat file. I had to declare my instance like this:

my_instance:
  type: OS::Nova::Server
  properties:
    name: instance-name
    image: RHEL-7.4
    flavor: std.cpu1ram1
    networks:
      - network: network-name.admin-network
    security_groups: [security-name.group-sec-default]
Thanks for your support
