I am starting to use CloudFormation for orchestration/provisioning and I see there are two ways to install packages:
The first way is with a bash script in the UserData section, for example:
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"\n",
[
"#!/bin/bash",
"apt-get update",
"apt-get upgrade -y",
"apt-get install apache2 -y",
"echo \"<html><body><h1>Welcome</h1>\" > /var/www/index.html",
Another way is to use cfn-init:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"yum update -y aws-cfn-bootstrap\n",
"# Install the files and packages from the metadata\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref" : "AWS::StackName" },
" --resource WebServerInstance ",
" --configsets Install ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
Are there any reasons to use cfn-init instead of plain bash in UserData?
Actually it makes no difference whether you use bash in UserData or cfn-init; both can install the same packages.
But to use cfn-init you need to install the aws-cfn-bootstrap package (as you have already done) and also define the AWS::CloudFormation::Init metadata on the resource:
"Resources": {
"MyInstance": {
"Type": "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"packages" : {
:
},
"groups" : {
:
},
"users" : {
:
},
"sources" : {
:
},
"files" : {
:
},
"commands" : {
:
},
"services" : {
:
}
}
}
},
"Properties": {
:
}
}
}
cfn-init is simply a cleaner, declarative way to get things done compared with running yum install yourself in a bash UserData script, though you will get the same outcome from both methods.
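For comparison, here is a rough sketch of how the apache2 example from the first snippet might look as AWS::CloudFormation::Init metadata (the resource name WebServerInstance and the file content are assumptions carried over from the snippets above):
"WebServerInstance": {
  "Type": "AWS::EC2::Instance",
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "packages": {
          "apt": { "apache2": [] }
        },
        "files": {
          "/var/www/index.html": {
            "content": "<html><body><h1>Welcome</h1></body></html>",
            "mode": "000644",
            "owner": "root",
            "group": "root"
          }
        },
        "services": {
          "sysvinit": {
            "apache2": { "enabled": "true", "ensureRunning": "true" }
          }
        }
      }
    }
  },
  "Properties": {
    :
  }
}
With this in place, the UserData only has to call cfn-init (as in the second snippet), and cfn-init takes care of installing the package, writing the file, and keeping apache2 enabled and running.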
Related
I'm trying to add a Rename-Computer line to a CloudFormation template, but it is not doing anything. I know that I have to use the UserData property inside the resource; I have looked at some examples and at the AWS CloudFormation documentation, but I think I'm missing something. In the examples they just invoke the PowerShell command (as I did below) and it works, but for me it does nothing. Can someone help me with this? If anyone has a better example that is already working, I would appreciate it.
"Resources" : {
"EC2InstanceOne":{
"Type":"AWS::EC2::Instance",
"DeletionPolicy" : "Retain",
"Properties":{
"InstanceType":{ "Ref" : "InstanceType" },
"SubnetId": { "Ref" : "MySubnetVM1" },
"SecurityGroupIds":[ { "Ref" : "SGUtilized" } ],
"SecurityGroupIds":[ { "Ref" : "SGUtilized2" } ],
"IamInstanceProfile" : { "Ref" : "RoleName" },
"KeyName": { "Ref" : "ServerKeyName" },
"ImageId":{ "Ref" : "AMIUtilized" },
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : {
"VolumeType" : "standard",
"DeleteOnTermination" : "false",
"VolumeSize" : "50"
}
}
],
"UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
"powershell.exe Rename-Computer -NewName TESTVM01",
"powershell.exe Restart-Computer"
]
]
}
}
}
}
}
Thanks, best regards.
I was able to fix it by replacing the PowerShell part with the following lines:
"<script>\n",
"PowerShell -Command \"& {Rename-Computer -NewName testvm01}\" \n",
"PowerShell -Command \"& {Restart-Computer}\" \n",
"</script>"
I am trying to create an EC2 instance, and I want to create a file on it that contains the instance's public DNS name. With the following code I get a circular dependency error, caused by this line:
"server_name = \"",{ "Fn::GetAtt" : [ "ECServer", "PublicDnsName" ]},"\"\n","\n"
Is it possible to get the public DNS name inside the instance's own resource definition while the instance is being created?
"ECServer": {
"Type": "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"configSets": {
"Install": ["ECServerConfig"]
},
"ECConfig": {
"files": {
"/tmp/test.txt" : {
"content": { "Fn::Join" : ["", [
"server_name = \"",{ "Fn::GetAtt" : [ "ECServer", "PublicDnsName" ]},"\"\n","\n"
]]},
"mode" : "000644",
"owner": "root",
"group": "root"
}
}
}
}
},
You can get the public IP of a running EC2 instance with a simple curl command against the instance metadata service:
1. SSH to that EC2 instance.
2. Execute the following command:
curl http://169.254.169.254/latest/meta-data/public-ipv4
You can try the following. In CloudFormation a resource cannot refer to its own properties with the Fn::GetAtt function (which is what causes the circular dependency), but the instance can read its own public hostname from the metadata service at runtime:
"ECServer": {
"Type": "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"configSets": {
"Install": ["ECServerConfig"]
},
"ECConfig": {
"files": {
"/tmp/test.txt" : {
"mode" : "000644",
"owner": "root",
"group": "root"
}
},
"commands" :{
"test" : {
"command" : "curl -s http://169.254.169.254/latest/meta-data/public-hostname > /tmp/test.txt",
"ignoreErrors" : "false"
}
}
}
}
}
}
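To actually run this metadata at boot, the instance's UserData still has to call cfn-init with the Install configset, along the lines of the earlier example (a sketch; ECServer is the resource name from the snippet above):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "#!/bin/bash -xe\n",
  "yum update -y aws-cfn-bootstrap\n",
  "/opt/aws/bin/cfn-init -v ",
  " --stack ", { "Ref" : "AWS::StackName" },
  " --resource ECServer ",
  " --configsets Install ",
  " --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}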
I have an AWS CloudFormation template that, among other things, creates a public EC2 instance with some Metadata in the form of AWS::CloudFormation::Init configsets. When run, these configsets are meant to: 1) install chef-solo, 2) create an AWS credentials file in /home/ec2-user/.aws/credentials, 3) use the credentials from 2) with the AWS CLI to retrieve a Chef cookbook from AWS S3, and 4) run the cookbook.
Everything works fine until 3). That step breaks and, according to the cfn-init logs, the problem is that the credentials for the AWS CLI can't be found. However, step 2) completes successfully, and when I log into the server manually I can see the credentials file in the right place and can successfully run aws s3 commands from the prompt (the same ones that were supposed to run automatically as part of the template).
Here is the error from the logs:
2016-05-04 04:03:14,950 P2482 [INFO] Command 2_fetch-cookbook
2016-05-04 04:03:15,977 P2482 [INFO] -----------------------Command Output-----------------------
2016-05-04 04:03:15,977 P2482 [INFO] Unable to locate credentials
2016-05-04 04:03:15,977 P2482 [INFO] Completed 1 part(s) with ... file(s) remaining
...and here is what it looks like when I log in:
[ec2-user@ip-10-0-1-243 ~]$ ls .aws
credentials
[ec2-user@ip-10-0-1-243 ~]$ aws s3 ls s3://my-bucket
2016-05-04 00:27:43 41472 kitchen.tar.gz
I've been fiddling with this for quite a while and just can't seem to get it, so I'm hoping someone here might be able to help. =) Below you can find the relevant code for the EC2 instance. Note that I have to use sudo su before installing chef-solo since that script downloads and unpacks an rpm. Then I switch back to the ec2-user for everything else.
"EC2Instance" : {
"Type" : "AWS::EC2::Instance",
"Description" : "EC2 Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"configSets" : {
"Setup" : [ "InstallChef", "SetAWSCreds", "Cook" ]
},
"InstallChef" : {
"commands" : {
"1_sudo" : {
"command" : "sudo su"
},
"2_install-chef" : {
"cw": "/home/ec2-user",
"command" : "curl -L https://www.opscode.com/chef/install.sh | bash"
},
"3_su" : {
"command" : "su ec2-user"
}
}
},
"SetAWSCreds" : {
"files" : {
"/home/ec2-user/.aws/credentials" : {
"content" : { "Fn::Join" : [ "", [
"[default]\n",
"aws_access_key_id = ",
{ "Ref" : "AwsAccessKeyId" },
"\n",
"aws_secret_access_key = ",
{ "Ref" : "AwsSecretAccessKey" },
"\n"
]]},
"owner" : "ec2-user",
"group" : "ec2-user"
}
}
},
"Cook" : {
"commands" : {
"1_ensure_ec2-user" : {
"command" : "su ec2-user"
},
"2_fetch-cookbook" : {
"cw" : "/home/ec2-user",
"command" : "aws s3 cp s3://my-bucket/kitchen.tar.gz ."
},
"3_unzip-cookbook" : {
"cw" : "/home/ec2-user",
"command" : "tar xvf kitchen.tar.gz"
},
"4_cook" : {
"cw" : "/home/ec2-user/kitchen",
"command" : "sudo chef-solo -c solo.rb -j web.json"
}
}
}
}
},
"Properties" : {
"ImageId" : "ami-08111162",
"KeyName" : { "Ref" : "KeyPairName" },
"InstanceType" : "t2.micro",
"SubnetId" : { "Ref" : "PublicSubnet" },
"SecurityGroupIds" : [ { "Ref" : "SecurityGroup" } ],
"UserData" : {
"Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"/opt/aws/bin/cfn-init -s ", { "Ref" : "AWS::StackName" },
" -r EC2Instance",
" -c Setup"
]]
}
}
}
}
I created an Auto Scaling group that launches EC2 instances behind an ELB. My question is: how do I provision those EC2 instances with Ansible?
Before, I used a CNAME record, but now I can't get the instance DNS name. Please correct me if I am wrong.
Should I use a dynamic inventory, or are there other options?
My CloudFormation template is below:
```
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Template create autoscaling group",
"Parameters": {
"devKeyPair": {
"Description": "Name of an existing EC2 KeyPair to enable SSH access to the instances",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default" : "dev-key"
}
},
"Resources" : {
"LaunchConfig" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Properties" : {
"KeyName" : { "Ref": "devKeyPair" },
"ImageId" : "ami-1effc703",
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n", "\n", " echo 'Installing Git'\n"," yum --nogpgcheck -y install wget\n""] ]}},
"InstanceType" : "t2.small",
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : {
"VolumeSize" : "10",
"VolumeType" : "gp2",
"DeleteOnTermination" : "true"
}
}
]
}
},
"BackendGroup" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"AvailabilityZones" : ["eu-central-1a"],
"MinSize" : "1",
"MaxSize" : "1",
"LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
"LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ],
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "bas-auto",
"Value": "bas-dev",
"Key": "Name",
"PropagateAtLaunch" : "true"
}
]
}
},
"ElasticLoadBalancer": {
"Type": "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties": {
"AvailabilityZones": ["eu-central-1a"],
"Listeners": [ {
"LoadBalancerPort": "80",
"InstancePort": "80",
"Protocol": "HTTP"
} ]
}
},
"BackendDNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to Bas instance",
"RecordSets" : [{
"Name" : "bas-dev.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ]
}
]
}]
}
}
}
}
```
Another solution would be to provision your machine image before the new instances start, i.e. make sure that the image the ASG launches instances from is already fully provisioned.
One way to do this is to use something like packer.io to create a new AMI using Ansible as your provisioner. Then you can simply pass this new AMI ID into the ImageId attribute of the LaunchConfiguration.
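For example, a minimal Packer template along these lines could build such an AMI (a sketch; the region, source AMI, instance type, and playbook path are assumptions you would replace with your own values):
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-central-1",
    "source_ami": "ami-1effc703",
    "instance_type": "t2.small",
    "ssh_username": "ec2-user",
    "ami_name": "bas-dev-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "./site.yml"
  }]
}
Once packer build produces the new AMI ID, point the LaunchConfiguration's ImageId at it, and instances come up already configured without any per-instance Ansible run.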
Another approach could involve using the User Data to "phone home" and tell you the public IP address the instance has acquired.
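A rough sketch of that idea in the LaunchConfiguration's UserData (the callback URL is purely a placeholder for whatever endpoint kicks off your Ansible run against the reported IP):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "#!/bin/bash\n",
  "PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)\n",
  "curl -s -X POST -d \"ip=${PUBLIC_IP}\" https://provisioning.example.com/register\n"
]]}}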
The best solution for me was to install Ansible Tower with a free license and then use the user_data properties to call back to Tower; Ansible has an example here: https://www.ansible.com/blog/autoscaling-infrastructures
But it is necessary to build a base image first, because if you do not, every scale-out is delayed by the full provisioning time.
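The pattern in that blog post boils down to user data that calls Tower's provisioning callback for a job template, roughly like this (a sketch; the Tower hostname, job template ID, and host_config_key are placeholders, and older Tower versions expose the callback under /api/v1/ instead of /api/v2/):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "#!/bin/bash\n",
  "curl -k -s --data 'host_config_key=YOUR_CONFIG_KEY' ",
  "https://tower.example.com/api/v2/job_templates/42/callback/\n"
]]}}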
You can use OpsWorks with CloudFormation in order to run Ansible whenever a new instance is added to the Auto Scaling group.
OpsWorks itself uses Chef, but you can use this custom cookbook, https://github.com/deepakagg/ansible-opsworks, which will run the desired playbook.
I'm building a stack that needs access to a private S3 bucket to download the most current version of my application. I'm using IAM roles, a relatively new AWS feature that allows EC2 instances to be assigned specific roles, which are then coupled with IAM policies. Unfortunately, these roles come with temporary API credentials generated at instantiation. It's not crippling, but it's forced me to do things like this cloud-init script (simplified to just the relevant bit):
#!/bin/sh
# Grab our credentials from the meta-data and parse the response
CREDENTIALS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access)
S3_ACCESS_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['AccessKeyId'];")
S3_SECRET_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['SecretAccessKey'];")
S3_TOKEN=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['Token'];")
# Create an executable script to pull the file
cat << EOF > /tmp/pullS3.rb
require 'rubygems'
require 'aws-sdk'
AWS.config(
:access_key_id => "$S3_ACCESS_KEY",
:secret_access_key => "$S3_SECRET_KEY",
:session_token => "$S3_TOKEN")
s3 = AWS::S3.new()
myfile = s3.buckets['mybucket'].objects["path/to/my/file"]
File.open("/path/to/save/myfile", "w") do |f|
f.write(myfile.read)
end
EOF
# Downloading the file
ruby /tmp/pullS3.rb
First and foremost: this works, and works pretty well. All the same, I'd love to use CloudFormation's existing support for source access. Specifically, cfn-init supports the use of authentication resources to get at protected data, including S3 buckets. Is there any way to get at these keys from within cfn-init, or perhaps tie the IAM role to an authentication resource?
I suppose one alternative would be putting my source behind some other authenticated service, but that's not a viable option at this time.
Another promising lead is the AWS::IAM::AccessKey resource, but the docs don't suggest it can be used with roles. I'm going to try it anyway.
I'm not sure when support was added, but in the meantime you can use an IAM role for authenticating S3 downloads for the files and sources sections in AWS::CloudFormation::Init.
Just use roleName instead of accessKeyId & secretKey (see AWS::CloudFormation::Authentication for details), e.g.:
"Metadata": {
"AWS::CloudFormation::Init": {
"download": {
"files": {
"/tmp/test.txt": {
"source": "http://myBucket.s3.amazonaws.com/test.txt"
}
}
}
},
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myBucket" ],
"roleName": { "Ref": "myRole" }
}
}
}
Tested with aws-cfn-bootstrap-1.3-11
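Note that for roleName authentication to work, the instance has to actually carry that role via an instance profile, so cfn-init can pick up the role's temporary credentials from the instance metadata. A minimal sketch (the resource names myRole, myInstanceProfile, and myInstance are assumptions, and other required instance properties are omitted):
"myInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [ { "Ref": "myRole" } ]
  }
},
"myInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "IamInstanceProfile": { "Ref": "myInstanceProfile" },
    "ImageId": "<MY_AMI_ID>"
  }
}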
I managed to get this working. What I used was code from this exchange:
https://forums.aws.amazon.com/message.jspa?messageID=319465
The code doesn't use IAM roles - it uses an IAM user with an access key plus an AWS::S3::BucketPolicy instead.
CloudFormation code snippet:
"Resources" : {
"CfnUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": { "Statement":[{
"Effect" : "Allow",
"Action" : [
"cloudformation:DescribeStackResource",
"s3:GetObject"
],
"Resource" :"*"
}]}
}]
}
},
"CfnKeys" : {
"Type" : "AWS::IAM::AccessKey",
"Properties" : {
"UserName" : {"Ref": "CfnUser"}
}
},
"BucketPolicy" : {
"Type" : "AWS::S3::BucketPolicy",
"Properties" : {
"PolicyDocument": {
"Version" : "2008-10-17",
"Id" : "CfAccessPolicy",
"Statement" : [{
"Sid" : "ReadAccess",
"Action" : ["s3:GetObject"],
"Effect" : "Allow",
"Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::<MY_BUCKET>/*"]]},
"Principal" : { "AWS": {"Fn::GetAtt" : ["CfnUser", "Arn"]} }
}]
},
"Bucket" : "<MY_BUCKET>"
}
},
"WebServer": {
"Type": "AWS::EC2::Instance",
"DependsOn" : "BucketPolicy",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"sources" : {
"/etc/<MY_PATH>" : "https://s3.amazonaws.com/<MY_BUCKET>/<MY_FILE>"
}
}
},
"AWS::CloudFormation::Authentication" : {
"S3AccessCreds" : {
"type" : "S3",
"accessKeyId" : { "Ref" : "CfnKeys" },
"secretKey" : {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]},
"buckets" : [ "<MY_BUCKET>" ]
}
}
},
"Properties": {
"ImageId" : "<MY_INSTANCE_ID>",
"InstanceType" : { "Ref" : "WebServerInstanceType" },
"KeyName" : {"Ref": "KeyName"},
"SecurityGroups" : [ "<MY_SECURITY_GROUP>" ],
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"# Helper function\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"# Install Webserver Packages etc \n",
"cfn-init -v --region ", { "Ref" : "AWS::Region" },
" -s ", { "Ref" : "AWS::StackName" }, " -r WebServer ",
" --access-key ", { "Ref" : "CfnKeys" },
" --secret-key ", {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]}, " || error_exit 'Failed to run cfn-init'\n",
"# All is well so signal success\n",
"cfn-signal -e 0 -r \"Setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
]]}}
}
}
}
Obviously, replace <MY_BUCKET>, <MY_PATH>, <MY_FILE>, <MY_INSTANCE_ID>, and <MY_SECURITY_GROUP> with your own values.