This is my CloudFormation template:
Description: Create a variable number of EC2 instance resources.
Parameters:
  InstanceCount:
    Description: Number of EC2 instances (must be between 1 and 3).
    Type: Number
    Default: 1
    MinValue: 1
    MaxValue: 3
    ConstraintDescription: Must be a number between 1 and 3.
    Description: launch EC2 instances.
    Type: AWS::EC2::Instance
InstanceType:
    Description: Launch EC2 instances.
    Type: String
    Default: t2.micro
    AllowedValues: [ t2.micro ]
Conditions:
  Launch1: !Equals [1, 1]
  Launch2: !Not [!Equals [1, !Ref InstanceCount]]
  Launch3: !Or
    - !Not [!Equals [1, !Ref InstanceCount]]
    - !Not [!Equals [2, !Ref InstanceCount]]
Resources:
  Instance1:
    Condition: Launch1
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      AvailabilityZone: us-east-1a
      ImageId: ami-a4c7edb2
  Instance2:
    Condition: Launch2
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      AvailabilityZone: us-east-1b
      ImageId: ami-a4c7edb2
  Instance3:
    Condition: Launch3
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      AvailabilityZone: us-east-1c
      ImageId: ami-a4c7edb2
Error Message
Template contains errors.: Invalid template property or properties [InstanceType]
Can someone please help me figure out why I am getting this error?
Thank you
If you look closely, the indentation of the template is messed up at InstanceType, right after InstanceCount. Fix it as shown below and you should be good to go.
  InstanceCount:
    Description: Number of EC2 instances (must be between 1 and 3).
    Type: Number
    Default: 1
    MinValue: 1
    MaxValue: 3
    ConstraintDescription: Must be a number between 1 and 3.
    Description: launch EC2 instances.
    Type: AWS::EC2::Instance
  InstanceType:
    Description: Launch EC2 instances.
    Type: String
    Default: t2.micro
    AllowedValues: [ t2.micro ]
Hope this helps.
Related
I create a DynamoDb table conditionally:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount
and this is how IsDevAccount is defined using an input parameter:
Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]
Now I'm creating a Lambda function that accepts the table's name (amongst other things) as input through environment variables. This is done conditionally, too. Within the function's code, I check whether the table name was passed (an empty value is passed if the condition isn't met), and if so, I put some items into it.
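(For reference, the conditional environment variable is wired up roughly like this; the TABLE_NAME variable name is just an example:)

Environment:
  Variables:
    # Pass the table name only when the table is created; empty otherwise
    TABLE_NAME: !If [ IsDevAccount, !Ref MyDynamoTable, '' ]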
However, I'm not sure how to apply policy templates to the function's role conditionally. Normally I do it like this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          TableName: !Ref MyDynamoTable
What happens to the function's execution role if the table isn't created because the condition isn't met (e.g.: in another account)? Can I apply this policy template conditionally, as well?
What I don't want to do is to blindly give write permission to all DynamoDB tables within the account.
Yes, you could add the condition to the DB write policy so that the write policy is only applied when the condition is met.
Since you're creating the table only if the environment is staging or development, you could apply a condition on the policy that checks for your table name and then applies the write policy. Example below:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Equals [ !Ref MyDynamoTable, "myTableName" ]
          TableName: !Ref MyDynamoTable
Update in response to comments:
!Ref returns the value of the specified parameter or resource. We need parameters with allowed values for the environment and the DB table for the conditions.
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - stage
      - prod
  MyDynamoTable:
    Description: table name for the db
    Type: String
    AllowedValues:
      - tableOne
      - tableTwo
      - myTableName

Conditions:
  IsDevAccount: !Equals [ !Ref Environment, "dev" ]
  TableExists: !Equals [ !Ref MyDynamoTable, "myTableName" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          # Only with the TableExists condition; it'll work fine with the added parameters
          Condition: !And [IsDevAccount, TableExists]
          TableName: !Ref MyDynamoTable
Ref: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Update 2:
Agreed. I researched and confirmed that there is no way to check for resources created in the same stack template (that's why I suggested a parameter); conditions are all parameter-based.
However, if the resource was already created in another stack, you could handle this through resource import. I don't think resource import will help with your requirement, though.
A workaround would be to have a Boolean parameter for the TableExists condition and pass the value through the AWS CLI at deploy time, like below:
MyDynamoTable:
  Description: dynamo db table
  Type: String
  AllowedValues:
    - true
    - false

Conditions:
  TableExists: !Equals [ !Ref MyDynamoTable, "true" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Ref TableExists
          TableName: !Ref MyDynamoTable
Pass the required parameters through the AWS CLI at deploy time:
aws cloudformation deploy --template-file templateName.yml --parameter-overrides MyDynamoTable="true" dynamoDBtableName="myTableName" (plus any other required parameters)
How to add a scaling policy to an Auto Scaling group (either new or existing) using CloudFormation or the AWS CLI
There are a significant number of examples of this, but below is a snippet from one of my existing CloudFormation templates.
1) Parameters
You should take the minimum and maximum sizes as parameters (a minimal sketch follows this list).
2) The autoscale group itself
I include it below, but if you didn't want to include it you could take it as a parameter instead. You can also use a condition based on whether that parameter was supplied to determine whether the ASG should be created (see the sketch after this list). Please note that if you do use the condition, you will also need to apply it, with an If statement, on all references (to determine whether to use the local ASG in the template or the parameter).
3) Alarms
This is the key element of an auto scaling group: determining the alarms. I'm using memory reservation of the cluster, but I would say CPU is the most common. You can use any metric CloudWatch monitors, and even custom metrics.
4) Policy
I'm currently reacting up and down quickly; it takes about 30-60s for a new instance to make an impact, which is why I have 120s between events. You need to understand your system to choose the right amounts and avoid over-scaling.
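For item 1, a minimal Parameters block matching the MinSize/MaxSize references in the snippet might look like the sketch below (the defaults are arbitrary). For item 2, an optional group-name parameter plus a condition and Fn::If is one way to switch between a locally created group and an existing one; the ExistingAsgName/CreateAsg names here are illustrative, not the ones used in my template.

Parameters:
  MinSize:
    Type: Number
    Default: 1
    Description: Minimum size of the Auto Scaling group
  MaxSize:
    Type: Number
    Default: 4
    Description: Maximum size of the Auto Scaling group
  ExistingAsgName:
    Type: String
    Default: ''
    Description: Optional name of an existing Auto Scaling group (leave empty to create a new one)

Conditions:
  # Create the ASG only when no existing group name was supplied
  CreateAsg: !Equals [ !Ref ExistingAsgName, '' ]

# Wherever the group is referenced, choose between the local ASG and the parameter, e.g.:
#   AutoScalingGroupName: !If [ CreateAsg, !Ref ECSClusterAutoScalingGroup, !Ref ExistingAsgName ]

And here is the snippet from my template: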
ECSClusterAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Condition: notDedicated
  Properties:
    VPCZoneIdentifier:
      - 'Fn::ImportValue': !Sub '${VPC}-PrivateSubnet1'
      - 'Fn::ImportValue': !Sub '${VPC}-PrivateSubnet2'
      - 'Fn::ImportValue': !Sub '${VPC}-PrivateSubnet3'
    MinSize: !Ref MinSize
    MaxSize: !Ref MaxSize
    HealthCheckGracePeriod: '600'
    HealthCheckType: EC2
    LaunchConfigurationName: !Ref ECSLaunchConfiguration
    MetricsCollection:
      - Granularity: 1Minute

ECSClusterScaleOutPolicy:
  Type: 'AWS::AutoScaling::ScalingPolicy'
  Condition: AutoScaleNotDedicated
  Properties:
    AdjustmentType: ChangeInCapacity
    AutoScalingGroupName: !Ref ECSClusterAutoScalingGroup
    Cooldown: '120'
    ScalingAdjustment: '1'

ECSClusterScaleOutAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Condition: AutoScaleNotDedicated
  Properties:
    EvaluationPeriods: '1'
    Statistic: Average
    Threshold: '70'
    AlarmDescription: Scale up alarm when Memory Reservation > 70% for 1 minute
    Period: '60'
    AlarmActions:
      - !Ref ECSClusterScaleOutPolicy
    Namespace: AWS/ECS
    Dimensions:
      - Name: ClusterName
        Value: !Ref ECSCluster
    ComparisonOperator: GreaterThanThreshold
    MetricName: MemoryReservation

ECSClusterScaleInPolicy:
  Type: 'AWS::AutoScaling::ScalingPolicy'
  Condition: AutoScaleNotDedicated
  Properties:
    AdjustmentType: ChangeInCapacity
    AutoScalingGroupName: !Ref ECSClusterAutoScalingGroup
    Cooldown: '120'
    ScalingAdjustment: '-1'

ECSClusterScaleInAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Condition: AutoScaleNotDedicated
  Properties:
    EvaluationPeriods: '1'
    Statistic: Average
    Threshold: '45'
    AlarmDescription: Scale down alarm when Memory Reservation <= 45% for 5 minutes
    Period: '300'
    AlarmActions:
      - !Ref ECSClusterScaleInPolicy
    Namespace: AWS/ECS
    Dimensions:
      - Name: ClusterName
        Value: !Ref ECSCluster
    ComparisonOperator: LessThanOrEqualToThreshold
    MetricName: MemoryReservation
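For the AWS CLI route mentioned in the question, a simple scaling policy can also be attached to an existing group from the command line; a rough example (the group and policy names are placeholders):

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-existing-asg \
  --policy-name scale-out-by-one \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1 \
  --cooldown 120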
I'm a newbie with Heat YAML templates on OpenStack.
I've got this command, which works fine:
openstack server create --image RHEL-7.4 --flavor std.cpu1ram1 --nic net-id=network-name.admin-network --security-group security-name.group-sec-default value instance-name
I tried to write this Heat file based on the command above:
heat_template_version: 2014-10-16
description: Simple template to deploy a single compute instance with an attached volume

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      image: RHEL-7.4
      flavor: std.cpu1ram1
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: security-name.group-sec-default

  security-group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: security-name.group-sec-default

  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
The stack creation failed with the following error message:
openstack stack create -t my_first.yaml First_stack
openstack stack show First_stack
.../...
| stack_status_reason | Resource CREATE failed: BadRequest: resources.my_instance: Unable to find security_group with name or id 'sec_group1' (HTTP 400) (Request-ID: req-1c5d041c-2254-4e43-8785-c421319060d0)
.../...
Thanks for helping,
According to the template guide, the rules property expects a list.
So, change the content of the template as below for security-group:
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules: [security-name.group-sec-default]

OR

security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - security-name.group-sec-default
After digging, I finally found what was wrong in my Heat file. I had to declare my instance like this:
my_instance:
  type: OS::Nova::Server
  properties:
    name: instance-name
    image: RHEL-7.4
    flavor: std.cpu1ram1
    networks:
      - network: network-name.admin-network
    security_groups: [security-name.group-sec-default]
Thanks for your support
I have been using the !Sub function in my CloudFormation YAML templates just fine, and when I use it as an object property value it works for me:
Object:
  Property1: !Sub some-value-with-a-${variable}-in-it
The value of variable gets replaced as expected.
However, I can't figure out how to use the !Sub function in an element of a string array
Array:
  - !Sub some-value-with-a-${variable}-in-it
That array element just gets ignored.
I am trying this in the context of a SAM template creating an AWS::Serverless::Function resource. The Policies property can take an array of strings:
lambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: api
    FunctionName: !Sub api-${MyStageName}
    Handler: Lambda:Api.Function::HandleAsync
    Runtime: dotnetcore1.0
    Policies:
      - AWSLambdaBasicExecutionRole
      - !Sub arn:aws:iam::${AWS::AccountId}:policy/some-policy
      - arn:aws:iam::123456789:policy/another-policy
The !Sub function works in the FunctionName property in this example, but I only end up with two policies attached to my generated role: AWSLambdaBasicExecutionRole and arn:aws:iam::123456789:policy/another-policy. The one using the !Sub function gets ignored.
I have tried options like putting the function on a new line:
Array:
  -
    !Sub some value with a ${variable} in it
Can anyone help?
Update
@Tom Melo pointed out that this is not an array problem, so I have adjusted my question.
Further investigation has revealed it is not a CloudFormation issue exactly, but very specific to the AWS::Serverless::Function resource type, and the Policies property within it. I suspect it has something to do with the fact that the Policies property is so flexible in what it can accept: it can accept strings referring to policy names or ARNs, and it can also accept policy documents describing new policies. I suspect this means it is not able to support the functions.
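For reference, the flexibility being described looks roughly like this; the entries are illustrative examples of the forms the property accepts, not my actual policies:

Policies:
  # SAM policy template
  - DynamoDBCrudPolicy:
      TableName: my-table
  # Managed policy name
  - AWSLambdaBasicExecutionRole
  # Managed policy ARN
  - arn:aws:iam::123456789:policy/another-policy
  # Inline policy document
  - Version: '2012-10-17'
    Statement:
      - Effect: Allow
        Action: s3:GetObject
        Resource: '*'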
Apparently, there's nothing wrong with the !Sub function in an array of elements.
I tried to create the following stack in CloudFormation and it worked:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'IAM Roles Template'
Parameters:
  ArnBase:
    Type: String
    Default: arn:aws:iam::aws:policy/
  AWSLambdaFullAccess:
    Type: String
    Default: AWSLambdaFullAccess
  AmazonSESFullAccess:
    Type: String
    Default: AmazonSESFullAccess
Resources:
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      Policies:
        -
          PolicyName: CloudFormationFullAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              -
                Effect: "Allow"
                Action: "cloudformation:*"
                Resource: "*"
      ManagedPolicyArns:
        - !Sub ${ArnBase}${AWSLambdaFullAccess}
        - !Sub ${ArnBase}${AmazonSESFullAccess}
        - !Sub arn:aws:iam::${AWS::AccountId}:policy/CustomAmazonGlacierReadOnlyAccess
That should work with SAM as well...
Workaround
The AWS::Serverless::Function resource type supports several ways of configuring access.
1. Refer to policies directly via the Policies property and have the framework create a role that has those policies.
2. Use the Role property to refer to a role which already contains the policies.
I was using option 1, but option 2 proved to be a way around the issue with the !Sub function.
Creating a role explicitly with the policies we want, using AWS::IAM::Role, means we can use the !Sub function within the ManagedPolicyArns property. For example:
role:
  Type: AWS::IAM::Role
  Properties:
    ...
    ManagedPolicyArns:
      - !Sub arn:aws:iam::${AWS::AccountId}:policy/some-policy
    ...

lambda:
  Type: AWS::Serverless::Function
  Properties:
    ...
    Role: !GetAtt role.Arn
    ...
Is there any way to have IPv6 addresses auto-assigned to EC2 instances within an autoscaling group+launch configuration?
VPC and subnets are all set up for IPv6. Manually created instances are ok.
I can also manually assign them, but I can't seem to find a way to do it in CloudFormation.
The current status is that CloudFormation support for IPv6 is workable - not fun or complete, but you can build a stack with it. I had to use two custom resources:
The first is a generic resource that I use for other things and reused here, to work around the missing ability to construct a subnet's /64 CIDR block from a VPC's auto-provided /56 network.
The other I had to add specifically to work around a bug in the EC2 API that CloudFormation uses (CloudFormation itself calls it correctly).
Here is my setup:
1. Add IPv6 CIDR block to your VPC:
VPCipv6:
  Type: "AWS::EC2::VPCCidrBlock"
  Properties:
    VpcId: !Ref VPC
    AmazonProvidedIpv6CidrBlock: true
2. Extract the network prefix for creating /64 subnets:
As explained in this answer.
VPCipv6Prefix:
  Type: Custom::Variable
  Properties:
    ServiceToken: !GetAtt [ IdentityFunc, Arn ]
    Value: !Select [ 0, !Split [ "00::/", !Select [ 0, !GetAtt VPC.Ipv6CidrBlocks ] ] ]
IdentityFunc is an "identity function" implemented in Lambda for "custom variables", as described in this answer. Unlike this linked answer, I implement the function directly in the same stack so it is easier to maintain. See here for the gist.
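If you don't have the gist handy, a minimal identity function might look roughly like the sketch below (not my exact code); it simply echoes the Value property back so other resources can read it via !GetAtt VPCipv6Prefix.Value. The IdentityFuncRole is assumed to be a basic Lambda execution role defined elsewhere in the stack.

IdentityFunc:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Runtime: python3.9
    Role: !GetAtt IdentityFuncRole.Arn
    Timeout: 30
    Code:
      ZipFile: |
        import cfnresponse

        def lambda_handler(event, context):
            # Echo the ResourceProperties back so the custom resource
            # exposes a 'Value' attribute to !GetAtt
            value = event.get('ResourceProperties', {}).get('Value', '')
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {'Value': value}, 'IdentityFuncResource')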
3. Add an IPv6 default route to your internet gateway:
RouteInternet6:
Type: "AWS::EC2::Route"
Properties:
RouteTableId: !Ref RouteTableMain
DestinationIpv6CidrBlock: "::/0"
GatewayId: !Ref IGWPublicNet
DependsOn:
- IGWNetAttachment
IGWNetAttachment is a reference to the AWS::EC2::VPCGatewayAttachment defined in the stack. If you don't wait for it, the route may fail to be set properly
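For completeness, that attachment is the usual one (shown here as a sketch, assuming the VPC and IGWPublicNet names used above):

IGWNetAttachment:
  Type: "AWS::EC2::VPCGatewayAttachment"
  Properties:
    VpcId: !Ref VPC
    InternetGatewayId: !Ref IGWPublicNet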
4. Add an IPv6 CIDR block to your subnets:
SubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone: !Select [ 0, !GetAZs { Ref: "AWS::Region" } ]
    CidrBlock: 172.20.0.0/24
    MapPublicIpOnLaunch: true
    # The following does not work when MapPublicIpOnLaunch is set, because of an EC2 bug
    ## AssignIpv6AddressOnCreation: true
    Ipv6CidrBlock: !Sub "${VPCipv6Prefix.Value}00::/64"
    VpcId:
      Ref: VPC
Regarding AssignIpv6AddressOnCreation being commented out: this is normally what you want to do, but apparently there's a bug in the EC2 API that prevents it from working, through no fault of CloudFormation. This is documented in this AWS forums thread, along with the solution, which I'll present next.
5. Fix the AssignIpv6AddressOnCreation problem with another lambda:
This is the lambda setup:
IPv6WorkaroundRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole
    Path: "/"
    Policies:
      - PolicyName: !Sub "ipv6-fix-logs-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: !Sub "ipv6-fix-modify-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - ec2:ModifySubnetAttribute
              Resource: "*"

IPv6WorkaroundLambda:
  Type: AWS::Lambda::Function
  Properties:
    Handler: "index.lambda_handler"
    Code: # import cfnresponse below is required to send a response back to CFN
      ZipFile:
        Fn::Sub: |
          import cfnresponse
          import boto3

          def lambda_handler(event, context):
              if event['RequestType'] == 'Delete':
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                  return

              responseValue = event['ResourceProperties']['SubnetId']
              ec2 = boto3.client('ec2', region_name='${AWS::Region}')
              # Flip the flag that the Subnet resource could not set itself
              ec2.modify_subnet_attribute(
                  AssignIpv6AddressOnCreation={'Value': True},
                  SubnetId=responseValue)
              responseData = {}
              responseData['SubnetId'] = responseValue
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
    Runtime: python2.7
    Role: !GetAtt IPv6WorkaroundRole.Arn
    Timeout: 30
And this is how you use it:
IPv6WorkaroundSubnetA:
  Type: Custom::SubnetModify
  Properties:
    ServiceToken: !GetAtt IPv6WorkaroundLambda.Arn
    SubnetId: !Ref SubnetA
This call races with the autoscaling group to complete the setup, but it is very unlikely to lose - I ran this a few dozen times and it never failed to set the field correctly before the first instance booted.
I had a very similar issue and had a chat with AWS Support concerning this. The current state is that IPv6 support in CloudFormation is very limited.
We ended up creating Custom Resources for lots of IPv6-specific things. We have a Custom Resource that:
Enables IPv6-allocation on a subnet
Creates an Egress-Only Internet Gateway
Adds a route to the Egress-Only Internet Gateway (the built-in Route resource says it "fails to stabilize" when pointing to an EIGW)
The Custom Resources are just Lambda functions that do the "raw" API call, plus an IAM Role that grants the Lambda enough permissions to make that call.
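As an illustration of the pattern (a sketch, not our exact resources), the egress-only internet gateway piece could be a Lambda-backed custom resource that makes the raw CreateEgressOnlyInternetGateway call. All names here are placeholders, and the referenced role (EC2 egress-only-IGW actions plus CloudWatch Logs) is assumed to be defined elsewhere in the stack:

EgressOnlyIgwFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Runtime: python3.9
    Role: !GetAtt EgressOnlyIgwRole.Arn
    Timeout: 60
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def lambda_handler(event, context):
            ec2 = boto3.client('ec2')
            try:
                if event['RequestType'] == 'Create':
                    resp = ec2.create_egress_only_internet_gateway(
                        VpcId=event['ResourceProperties']['VpcId'])
                    eigw_id = resp['EgressOnlyInternetGateway']['EgressOnlyInternetGatewayId']
                    cfnresponse.send(event, context, cfnresponse.SUCCESS,
                                     {'EgressOnlyInternetGatewayId': eigw_id}, eigw_id)
                elif event['RequestType'] == 'Delete':
                    ec2.delete_egress_only_internet_gateway(
                        EgressOnlyInternetGatewayId=event['PhysicalResourceId'])
                    cfnresponse.send(event, context, cfnresponse.SUCCESS, {},
                                     event['PhysicalResourceId'])
                else:
                    # Nothing to change on update in this simple sketch
                    cfnresponse.send(event, context, cfnresponse.SUCCESS, {},
                                     event['PhysicalResourceId'])
            except Exception as e:
                cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})

EgressOnlyIgw:
  Type: Custom::EgressOnlyInternetGateway
  Properties:
    ServiceToken: !GetAtt EgressOnlyIgwFunction.Arn
    VpcId: !Ref VPC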