I am attempting to provision a few EC2 instances that each need multiple EBS volumes, defining the root volume and four additional volumes through BlockDeviceMappings.
Problem:
As far as I can tell, the code below conforms to every example I have seen online. But when Windows boots up, it dies instantly, and in the EC2 console I can see that the instance has seven EBS volumes attached (instead of five) and that /dev/xda is set as the root device instead of /dev/sda1.
"Mappings" : {
"AWSRegionToAMI" : {
"us-east-1" : { "Windows2012R2" : "ami-5d1b984a" },
"us-west-1" : { "Windows2012R2" : "ami-07713767" },
"us-west-2" : { "Windows2012R2" : "ami-241bd844" }
},
"VolumeSize" : {
"DataDrive" : { "Size" : "50" },
"LogDrive" : { "Size" : "50" },
"TempDrive" : { "Size" : "400" },
"BackupDrive" : { "Size" : "100" }
},
"stackmap" : {
"sqlha" : {
"Name": "MS SQL Server 2014 Always On",
"chefjson" : "https://s3.amazonaws.com/[redacted]",
"os" : "win",
"bootstrapurl" : "https://s3.amazonaws.com/[redacted]"
}
}
},
"WSFCNode1": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId" : { "Fn::FindInMap" : [ "AWSRegionToAMI", { "Ref" : "AWS::Region" }, "Windows2012R2" ] },
"InstanceType": { "Ref": "InstanceType" },
"EbsOptimized": "true",
"NetworkInterfaces": [
{
"DeleteOnTermination": "true",
"DeviceIndex": 0,
"SubnetId": { "Ref": "PrivateSubnet1ID" },
"SecondaryPrivateIpAddressCount": 2,
"GroupSet": [
{ "Ref": "WSFCSecurityGroup" },
{ "Ref": "WSFCClientSecurityGroup" }
]
}
],
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs" : {"VolumeSize": "60"}
},
{
"DeviceName": "/dev/xvdb",
"Ebs" : {"VolumeSize": { "Fn::FindInMap" : [ "VolumeSize", "DataDrive", "Size" ]} }
},
{
"DeviceName": "/dev/xvdc",
"Ebs" : {"VolumeSize": { "Fn::FindInMap" : [ "VolumeSize", "LogDrive", "Size" ]} }
},
{
"DeviceName": "/dev/xvdd",
"Ebs" : {"VolumeSize": { "Fn::FindInMap" : [ "VolumeSize", "TempDrive", "Size" ]} }
},
{
"DeviceName": "/dev/xvde",
"Ebs" : {"VolumeSize": { "Fn::FindInMap" : [ "VolumeSize", "BackupDrive", "Size" ]} }
}
],
"KeyName": { "Ref": "KeyPairName" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<powershell>\n",
"# Disable UAC and allow scripts to run\n",
"New-ItemProperty -Path HKLM:Software\\Microsoft\\Windows\\CurrentVersion\\policies\\system -Name EnableLUA -PropertyType DWord -Value 0 -Force\n",
"Set-ExecutionPolicy Unrestricted -force\n",
"c:\\windows\\System32\\WindowsPowershell\\v1.0\\powershell.exe -noninteractive -noprofile Set-ExecutionPolicy unrestricted -force\n",
"c:\\windows\\syswow64\\windowspowershell\\v1.0\\powershell.exe -noninteractive -noprofile Set-ExecutionPolicy unrestricted -force\n",
"#Change TimeZone\n",
"tzutil /s ", {"Ref" : "Timezone"}, "\n",
"#Run Bootstrap PS1\n",
"$newname = '", { "Fn::Join" : ["", [{"Ref" : "Environment"}, {"Ref" : "Location"}, {"Ref" : "Stack"}, {"Ref" : "Role"} ]]},"'\n",
"$region = '", {"Ref" : "VPCRegion"}, "'\n",
"$role = '", {"Ref" : "Role"}, "'\n",
"$chef_rb = '", { "Fn::FindInMap" : [ "stackmap", { "Ref" : "Role" }, "chefjson"]}, "'\n",
"mkdir 'c:\\temp' -force\n",
"(new-object System.Net.WebClient).DownloadFile( 'https://s3.amazonaws.com/[redacted]', 'c:\\temp\\bootstrap.ps1')\n",
"powershell c:\\temp\\bootstrap.ps1 -newname $newname -region $region -role $role -chef_rb $chef_rb -logfile c:\\temp\\bootstrap.log -verbose true\n",
"#Reboot if needed\n",
"Start-Sleep -s 10\n",
"Restart-Computer\n",
"mkdir 'c:\\temp\\cf_reboot_cmd_ran' -force\n",
"shutdown -r\n",
"mkdir 'c:\\temp\\cf_shut_cmd_ran' -force\n",
"Start-Sleep -s 10\n",
"mkdir 'c:\\temp\\cf_ran_again' -force\n",
"</powershell>"
] ] }
},
"Tags": [
{ "Key": "Name", "Value": "SQL Node 1" }
]
}
},
Confusingly, even when I drop all the extra drives and just do a block device mapping of one disk
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs" : {"VolumeSize": "60"}
}
],
...I still end up with three volumes, and the wrong one (/dev/xda) assigned as root.
Is this a Windows thing? What do my block device mappings need to look like to mount correctly as root (or C:, in this case)?
Never mind: the root problem was the AMI I chose. Once I selected a proper Windows AMI, everything worked perfectly.
For anybody else running into this problem, double-check your AMI selection.
Here is the JSON data that I am trying to send from Filebeat to the ingest pipeline "logpipeline.json" in OpenSearch.
JSON data:
{
"#timestamp":"2022-11-08T10:07:05+00:00",
"client":"10.x.x.x",
"server_name":"example.stack.com",
"server_port":"80",
"server_protocol":"HTTP/1.1",
"method":"POST",
"request":"/example/api/v1/",
"request_length":"200",
"status":"500",
"bytes_sent":"598",
"body_bytes_sent":"138",
"referer":"",
"user_agent":"Java/1.8.0_191",
"upstream_addr":"10.x.x.x:10376",
"upstream_status":"500",
"gzip_ratio":"",
"content_type":"application/json",
"request_time":"6.826",
"upstream_response_time":"6.826",
"upstream_connect_time":"0.000",
"upstream_header_time":"6.826",
"remote_addr":"10.x.x.x",
"x_forwarded_for":"10.x.x.x",
"upstream_cache_status":"",
"ssl_protocol":"TLSv",
"ssl_cipher":"xxxx",
"ssl_session_reused":"r",
"request_body":"{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}",
"response_body":"{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}",
"limit_req_status":"",
"log_body":"1",
"connection_upgrade":"close",
"http_upgrade":"",
"request_uri":"/example/api/v1/",
"args":""
}
Filebeat to OpenSearch log shipping
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.29.117:9200"]
pipeline: logpipeline
#index: "filebeatelastic-%{[agent.version]}-%{+yyyy.MM.dd}"
index: "nginx_dev-%{+yyyy.MM.dd}"
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.enabled: true
ssl.verification_mode: none
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "filebeat"
password: "filebeat"
In the ingest pipeline I transform some of the "data" fields by doing type conversion, which works perfectly. The only problem I am facing is with "#timestamp".
"#timestamp" is of type "date", and once the JSON data goes through the pipeline I map the JSON message to a root-level JSON object called "data". In that transformed data, "data.#timestamp" shows up as type "string", even though I haven't applied any transformation to it.
OpenSearch ingest pipeline - logpipeline.json
{
"description" : "Logging Pipeline",
"processors" : [
{
"json" : {
"field" : "message",
"target_field" : "data"
}
},
{
"date" : {
"field" : "data.#timestamp",
"formats" : ["ISO8601"]
}
},
{
"convert" : {
"field" : "data.body_bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_length",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_connect_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_header_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_response_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
}
]
}
Is there any way I can preserve the "date" type of the "#timestamp" field even after the transformation carried out in the ingest pipeline?
Edit 1: ingest pipeline simulate result
{
"docs" : [
{
"doc" : {
"_index" : "_index",
"_id" : "_id",
"_source" : {
"index_date" : "2022.11.08",
"#timestamp" : "2022-11-08T12:07:05.000+02:00",
"message" : """
{ "#timestamp": "2022-11-08T10:07:05+00:00", "client": "10.x.x.x", "server_name": "example.stack.com", "server_port": "80", "server_protocol": "HTTP/1.1", "method": "POST", "request": "/example/api/v1/", "request_length": "200", "status": "500", "bytes_sent": "598", "body_bytes_sent": "138", "referer": "", "user_agent": "Java/1.8.0_191", "upstream_addr": "10.x.x.x:10376", "upstream_status": "500", "gzip_ratio": "", "content_type": "application/json", "request_time": "6.826", "upstream_response_time": "6.826", "upstream_connect_time": "0.000", "upstream_header_time": "6.826", "remote_addr": "10.x.x.x", "x_forwarded_for": "10.x.x.x", "upstream_cache_status": "", "ssl_protocol": "TLSv", "ssl_cipher": "xxxx", "ssl_session_reused": "r", "request_body": "{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}", "response_body": "{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}", "limit_req_status": "", "log_body": "1", "connection_upgrade": "close", "http_upgrade": "", "request_uri": "/example/api/v1/", "args": ""}
""",
"data" : {
"server_name" : "example.stack.com",
"request" : "/example/api/v1/",
"referer" : "",
"log_body" : "1",
"upstream_addr" : "10.x.x.x:10376",
"body_bytes_sent" : 138,
"upstream_header_time" : 6.826,
"ssl_cipher" : "xxxx",
"response_body" : """{"statusCode":500,"reasonPhrase":"Internal Server Error","errorMessage":"xxxx"}""",
"upstream_status" : "500",
"request_time" : 6.826,
"upstream_cache_status" : "",
"content_type" : "application/json",
"client" : "10.x.x.x",
"user_agent" : "Java/1.8.0_191",
"ssl_protocol" : "TLSv",
"limit_req_status" : "",
"remote_addr" : "10.x.x.x",
"method" : "POST",
"gzip_ratio" : "",
"http_upgrade" : "",
"bytes_sent" : 598,
"request_uri" : "/example/api/v1/",
"x_forwarded_for" : "10.x.x.x",
"args" : "",
"#timestamp" : "2022-11-08T10:07:05+00:00",
"upstream_connect_time" : 0.0,
"request_body" : """{"date":null,"sourceType":"BPM","processId":"xxxxx","comment":"Process status: xxxxx: ","user":"xxxx"}""",
"request_length" : 200,
"ssl_session_reused" : "r",
"server_port" : "80",
"upstream_response_time" : 6.826,
"connection_upgrade" : "close",
"server_protocol" : "HTTP/1.1",
"status" : "500"
}
},
"_ingest" : {
"timestamp" : "2023-01-18T08:06:35.335066236Z"
}
}
}
]
}
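For reference, the output above comes from the ingest pipeline simulate API. A minimal sketch of the request body for POST _ingest/pipeline/logpipeline/_simulate (the pipeline name is the one defined above; the message field is heavily abbreviated here, so treat it as an illustration rather than the exact document used):
{
  "docs": [
    {
      "_source": {
        "message": "{ \"#timestamp\": \"2022-11-08T10:07:05+00:00\", \"status\": \"500\", \"request_time\": \"6.826\" }"
      }
    }
  ]
}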
Finally able to resolve my issue. I updated filebeat.yml with the following; previously the template name and pattern were different, but the default template name "filebeat" and pattern "filebeat" seem to be doing the job for me:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
But I still need to figure out how templates work, though.
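One way to pin the type explicitly, rather than relying on whichever template happens to match, would be an index template that maps data.#timestamp as a date for the nginx_dev-* indices used in the Filebeat output above. A minimal sketch of the body for PUT _index_template/nginx_dev_template (the template name is arbitrary, and this is an assumption on my part, not what the author ended up using):
{
  "index_patterns": ["nginx_dev-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "#timestamp": { "type": "date" }
          }
        }
      }
    }
  }
}
With a template like this in place before the daily index is created, dynamic mapping no longer decides the field's type.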
I'm trying to add the Rename-Computer line to a CloudFormation script, but it is not doing anything. I know that I have to use the UserData property inside the resource section; I have looked at some examples and at the AWS CloudFormation documentation, but I think I'm missing something. In the examples they just invoke the PowerShell command (as I did below) and it works, but for me it does nothing. Can someone help me with this? If anyone has a working example, I would appreciate it.
"Resources" : {
"EC2InstanceOne":{
"Type":"AWS::EC2::Instance",
"DeletionPolicy" : "Retain",
"Properties":{
"InstanceType":{ "Ref" : "InstanceType" },
"SubnetId": { "Ref" : "MySubnetVM1" },
"SecurityGroupIds":[ { "Ref" : "SGUtilized" } ],
"SecurityGroupIds":[ { "Ref" : "SGUtilized2" } ],
"IamInstanceProfile" : { "Ref" : "RoleName" },
"KeyName": { "Ref" : "ServerKeyName" },
"ImageId":{ "Ref" : "AMIUtilized" },
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : {
"VolumeType" : "standard",
"DeleteOnTermination" : "false",
"VolumeSize" : "50"
}
}
],
"UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
"powershell.exe Rename-Computer -NewName TESTVM01",
"powershell.exe Restart-Computer"
]
]
}
}
}
}
}
Thanks, best regards.
I was able to fix it by replacing the PowerShell part with the following:
"<script>\n",
"PowerShell -Command \"& {Rename-Computer -NewName testvm01}\" \n",
"PowerShell -Command \"& {Restart-Computer}\" \n",
"</script>"
I am not able to set LBCookieStickinessPolicy for an ELB using a CloudFormation script.
"LBCookieStickinessPolicy": [
{
"PolicyName": "Sample",
"CookieExpirationPeriod": "180"
}
]
You need to associate this policy with a listener. Include the policy name in the listener's PolicyNames property.
"LBCookieStickinessPolicy" : [{
"PolicyName" : "Sample",
"CookieExpirationPeriod" : "180"
} ],
"Listeners" : [ {
"LoadBalancerPort" : "80",
"InstancePort" : { "Ref" : "InstancePort" },
"Protocol" : "HTTP",
"PolicyNames" : [ "Sample" ]
} ],
I created an Auto Scaling group that launches EC2 instances behind an ELB. My question is: how do I provision those EC2 instances with Ansible?
Before, I used a CNAME, but now I can't get the instance DNS. Please correct me if I am wrong.
Should I use a dynamic inventory, or are there other options?
My CloudFormation template is below:
```
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Template create autoscaling group",
"Parameters": {
"devKeyPair": {
"Description": "Name of an existing EC2 KeyPair to enable SSH access to the instances",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default" : "dev-key"
}
},
"Resources" : {
"LaunchConfig" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Properties" : {
"KeyName" : { "Ref": "devKeyPair" },
"ImageId" : "ami-1effc703",
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n", "\n", " echo 'Installing Git'\n"," yum --nogpgcheck -y install wget\n""] ]}},
"InstanceType" : "t2.small",
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : {
"VolumeSize" : "10",
"VolumeType" : "gp2",
"DeleteOnTermination" : "true"
}
}
]
}
},
"BackendGroup" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"AvailabilityZones" : ["eu-central-1a"],
"MinSize" : "1",
"MaxSize" : "1",
"LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
"LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ],
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "bas-auto",
"Value": "bas-dev",
"Key": "Name",
"PropagateAtLaunch" : "true"
}
]
}
},
"ElasticLoadBalancer": {
"Type": "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties": {
"AvailabilityZones": ["eu-central-1a"],
"Listeners": [ {
"LoadBalancerPort": "80",
"InstancePort": "80",
"Protocol": "HTTP"
} ]
}
},
"BackendDNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to Bas instance",
"RecordSets" : [{
"Name" : "bas-dev.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ]
}
]
}]
}
}
}
}
```
Another solution would be to provision your VM before starting the new instance. I.e. make sure that the image you're starting the ASG instances from is already provisioned.
One way to do this is to use something like packer.io to create a new AMI using Ansible as your provisioner. Then you can simply pass this new AMI ID into the ImageId attribute of the LaunchConfiguration.
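A sketch of what that Packer template could look like in the legacy JSON format, using the Ansible provisioner; the source AMI and instance type are copied from the launch configuration above, while the region, ssh_username, ami_name, and playbook path are assumptions to adjust for your setup:
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-central-1",
      "source_ami": "ami-1effc703",
      "instance_type": "t2.small",
      "ssh_username": "ec2-user",
      "ami_name": "bas-dev-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./site.yml"
    }
  ]
}
The resulting AMI ID then replaces the hard-coded ImageId in the LaunchConfig resource.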
Another approach could involve using the User Data to "phone home" and tell you the public IP address the instance has acquired.
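As a rough sketch of that idea in the LaunchConfig's UserData (the callback URL and path are placeholders for whatever endpoint your provisioning system exposes; the public-ipv4 path is standard EC2 instance metadata):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "#!/bin/bash\n",
  "# Look up this instance's public IP from the instance metadata service\n",
  "PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)\n",
  "# 'Phone home' so the controller knows which host to provision next\n",
  "curl -s -X POST -d \"ip=${PUBLIC_IP}\" https://provisioning.example.com/phone-home\n"
] ] } }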
The best solution for me was to install Ansible Tower with a free license and then use the user_data properties; Ansible has an example here: https://www.ansible.com/blog/autoscaling-infrastructures
But it is necessary to build a base image first, because if you do not, the provisioning time gets much longer.
You can use OpsWorks with CloudFormation in order to run Ansible whenever a new instance is added to the Auto Scaling group.
OpsWorks uses Chef, but you can use this custom cookbook https://github.com/deepakagg/ansible-opsworks, which will run the desired playbook.
I'm building a stack that needs access to a private S3 bucket to download the most current version of my application. I'm using IAM roles, a relatively new AWS feature that allows EC2 instances to be assigned specific roles, which are then coupled with IAM policies. Unfortunately, these roles come with temporary API credentials generated at instantiation. It's not crippling, but it's forced me to do things like this cloud-init script (simplified to just the relevant bit):
#!/bin/sh
# Grab our credentials from the meta-data and parse the response
CREDENTIALS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access)
S3_ACCESS_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['AccessKeyId'];")
S3_SECRET_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['SecretAccessKey'];")
S3_TOKEN=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['Token'];")
# Create an executable script to pull the file
cat << EOF > /tmp/pullS3.rb
require 'rubygems'
require 'aws-sdk'
AWS.config(
:access_key_id => "$S3_ACCESS_KEY",
:secret_access_key => "$S3_SECRET_KEY",
:session_token => "$S3_TOKEN")
s3 = AWS::S3.new()
myfile = s3.buckets['mybucket'].objects["path/to/my/file"]
File.open("/path/to/save/myfile", "w") do |f|
f.write(myfile.read)
end
EOF
# Downloading the file
ruby /tmp/pullS3.rb
First and foremost: This works, and works pretty well. All the same, I'd love to use CloudFormation's existing support for source access. Specifically, cfn-init supports the use of authentication resources to get at protected data, including S3 buckets. Is there any way to get at these keys from within cfn-init, or perhaps tie the IAM role to an authentication resource?
I suppose one alternative would be putting my source behind some other authenticated service, but that's not a viable option at this time.
Another promising lead is the AWS::IAM::AccessKey resource, but the docs don't suggest it can be used with roles. I'm going to try it anyway.
I'm not sure when support was added, but you can now use an IAM role to authenticate S3 downloads for the files and sources sections in AWS::CloudFormation::Init.
Just use roleName instead of accessKeyId & secretKey (see AWS::CloudFormation::Authentication for details), e.g.:
"Metadata": {
"AWS::CloudFormation::Init": {
"download": {
"files": {
"/tmp/test.txt": {
"source": "http://myBucket.s3.amazonaws.com/test.txt"
}
}
}
},
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myBucket" ],
"roleName": { "Ref": "myRole" }
}
}
}
Tested with aws-cfn-bootstrap-1.3-11
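For completeness, the { "Ref": "myRole" } above assumes an IAM role is defined elsewhere in the template. A minimal sketch of such a role, plus the instance profile the instance itself would reference through its IamInstanceProfile property (resource names are placeholders; the bucket name matches the example):
"myRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": [ "ec2.amazonaws.com" ] },
        "Action": [ "sts:AssumeRole" ]
      }]
    },
    "Path": "/",
    "Policies": [{
      "PolicyName": "S3Download",
      "PolicyDocument": {
        "Statement": [{
          "Effect": "Allow",
          "Action": [ "s3:GetObject" ],
          "Resource": "arn:aws:s3:::myBucket/*"
        }]
      }
    }]
  }
},
"myInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [ { "Ref": "myRole" } ]
  }
}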
I managed to get this working. What I used was code from this exchange:
https://forums.aws.amazon.com/message.jspa?messageID=319465
The code doesn't use IAM policies; it uses AWS::S3::BucketPolicy instead.
CloudFormation code snippet:
"Resources" : {
"CfnUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": { "Statement":[{
"Effect" : "Allow",
"Action" : [
"cloudformation:DescribeStackResource",
"s3:GetObject"
],
"Resource" :"*"
}]}
}]
}
},
"CfnKeys" : {
"Type" : "AWS::IAM::AccessKey",
"Properties" : {
"UserName" : {"Ref": "CfnUser"}
}
},
"BucketPolicy" : {
"Type" : "AWS::S3::BucketPolicy",
"Properties" : {
"PolicyDocument": {
"Version" : "2008-10-17",
"Id" : "CfAccessPolicy",
"Statement" : [{
"Sid" : "ReadAccess",
"Action" : ["s3:GetObject"],
"Effect" : "Allow",
"Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::<MY_BUCKET>/*"]]},
"Principal" : { "AWS": {"Fn::GetAtt" : ["CfnUser", "Arn"]} }
}]
},
"Bucket" : "<MY_BUCKET>"
}
},
"WebServer": {
"Type": "AWS::EC2::Instance",
"DependsOn" : "BucketPolicy",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"sources" : {
"/etc/<MY_PATH>" : "https://s3.amazonaws.com/<MY_BUCKET>/<MY_FILE>"
}
}
},
"AWS::CloudFormation::Authentication" : {
"S3AccessCreds" : {
"type" : "S3",
"accessKeyId" : { "Ref" : "CfnKeys" },
"secretKey" : {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]},
"buckets" : [ "<MY_BUCKET>" ]
}
}
},
"Properties": {
"ImageId" : "<MY_INSTANCE_ID>",
"InstanceType" : { "Ref" : "WebServerInstanceType" },
"KeyName" : {"Ref": "KeyName"},
"SecurityGroups" : [ "<MY_SECURITY_GROUP>" ],
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"# Helper function\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"# Install Webserver Packages etc \n",
"cfn-init -v --region ", { "Ref" : "AWS::Region" },
" -s ", { "Ref" : "AWS::StackName" }, " -r WebServer ",
" --access-key ", { "Ref" : "CfnKeys" },
" --secret-key ", {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]}, " || error_exit 'Failed to run cfn-init'\n",
"# All is well so signal success\n",
"cfn-signal -e 0 -r \"Setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
]]}}
}
}
Obviously, replace MY_BUCKET, MY_FILE, MY_INSTANCE_ID, and MY_SECURITY_GROUP with your values.