How do I use a CloudFormation output value in a script with Ansible

I'm trying to set up some automation for a school project. The gist of it is:
- Install an EC2 instance via CloudFormation.
- Then use cfn-init to:
  - Install a very basic Ansible configuration
  - Download an Ansible playbook from S3
  - Run said playbook to install a Redshift cluster via CloudFormation
  - Install some necessary packages
  - Install some necessary Python modules
  - Download a Python script that will:
    - Connect to the Redshift database
    - Create a table
    - Use the COPY command to import data into the table
It all works up to the point of executing the script. Doing so manually works a treat, but that is because I can copy the created Redshift endpoint into the script for the database connection. The issue I have is that I don't know how to extract that output value from CloudFormation so it can be inserted into the script for a fully automated (save the initial EC2 deployment) solution.
I see that Ansible has at least one means of doing so (cloudformation_facts, for instance), but I'm a bit foggy on how to implement it. I've looked at examples but it hasn't become any clearer. Without context I'm lost and so far all I've seen are standalone snippets.

In order to ensure an answer is listed:
I figured out the describe-stacks and describe-stack-resources sub-commands of the aws cloudformation CLI command. Using these, I was able to track down the information I needed. In particular, I needed to access a role. This is the command that I used:
aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
  --query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
  --output text
I first used the describe-stacks subcommand to get a list of my stacks. The relevant stack is the first in the list (an array), so I used Stacks[0] at the top of my query. I then used Outputs, since I am interested in a value from the CloudFormation output list. I know the name of the key (RedshiftClusterEndpointAddress), so I used that in the filter. I then used OutputValue to return the value of RedshiftClusterEndpointAddress.
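To feed that value into the rest of the automation without hand-editing the script, one option is to capture the CLI output in a shell variable and pass it to Ansible as an extra variable. A minimal sketch, where the playbook name redshift_load.yml and the variable name redshift_endpoint are placeholders of your choosing:

ENDPOINT=$(aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
  --query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
  --output text)

# Extra vars have the highest precedence in Ansible, so the playbook
# can reference {{ redshift_endpoint }} wherever the endpoint is needed.
ansible-playbook redshift_load.yml --extra-vars "redshift_endpoint=$ENDPOINT"

The playbook can then template the endpoint into the Python script (or pass it as an argument) instead of having it hard-coded.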

Related

Terraform stop command

I'm new to Terraform. I wrote Terraform code to provision an instance to Oracle Cloud Infrastructure and it works fine. The problem is that there is no "stop" command in the Terraform CLI; there is only a "destroy" command.
Is there any way to stop resources instead of destroying them?
If you are looking for a solution, there is no such option currently. You can request the feature here.
As a temporary workaround, you could toy with user_data: update it to issue a shutdown request and reapply it with terraform apply.
I suggest using Terraform for Provisioning (Apply) and Termination (Destroy). Stopping/starting the instance through the OCI CLI is simple and can be done with a simple shell script such as this.
This is less complex and more easily maintainable for simpler requirements.
Instance.sh File
# $1 is the lifecycle action to send, e.g. START or STOP
oci compute instance action --action $1 --instance-id ocid1.instance.oc1.iad.an.....7d3xamffeq
Start Command:
$ source Instance.sh start
Stop Command:
$ source Instance.sh stop
You also have the option of setting up Functions that can manage the lifecycle actions. There is already a public solution, such as the one below, in case you would like to pursue it further.
https://github.com/AnykeyNL/OCI-AutoScale/blob/master/AutoScaleALL.py

Not able to run AWS CLI commands through a Windows script (.bat file)

I am trying to create an "aws_configure.bat" file which will run AWS commands. I need to configure the "aws_configure.bat" file as a Windows task. I created my script with the below content.
aws configure set AWS_ACCESS_KEY_ID <mykey>
aws configure set aws_secret_access_key <myskey>
aws configure set region us-west-2
aws dynamodb list-tables
When I try to run this script, it only prints the first line in the cmd window. Can someone please suggest what the problem is here? Why is my script not able to run the AWS CLI commands? (I have installed the AWS CLI on my system, and when I run these commands directly in the cmd window, everything works fine.)
You should consider creating and configuring your AWS credentials outside of your batch file, then referencing the named profile from the batch file.
Run aws configure --profile myprofile, and provide the information required.
Then from your batch file, call aws dynamodb list-tables --profile myprofile.
To set up the preferred/default profile, set AWS_PROFILE=myprofile in the system environment. With this method, you will not need to reference the profile in the batch file.
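As for why only the first line runs: on Windows, aws resolves to a batch script (aws.cmd), and invoking one batch script from another without call transfers control to it and never returns. A minimal sketch of the profile approach, assuming a profile named myprofile was already created with aws configure --profile myprofile:

@echo off
REM "call" is required because aws is itself a batch script (aws.cmd);
REM without it, the lines after the first aws command never execute.
call aws dynamodb list-tables --profile myprofile
call aws s3 ls --profile myprofile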

AWS bash script on EC2 instance launch

I would like to automate these steps:
- Unzip a zip package (is it possible to upload this zip to an S3 bucket and download it during the script? If so, how?)
- Edit Apache configuration files (ports.conf, /etc/apache2/sites-available/example.com.conf)
- Run apt-get commands.
I really do not know how to create a script file to be run on EC2 instance startup.
Could anybody help me, please?
Thank you very much!
What you're looking for is User Data, which gives you the ability to run your script when the EC2 instance is launched.
When you create your EC2 instance, in step 3 (Configure Instance Details), go to the bottom of the page and click on "Advanced Details". From there you can enter your script.
If you're using an Amazon AMI, the CLI is built in and you can use it; make sure to have an EC2 IAM role defined with the necessary rights on your AWS resources.
Now in terms of your script, this is vague but roughly:
- Run aws s3 cp s3_file local_file to download a zip file from S3 onto the instance, then use the Linux unzip command to extract the contents.
- Edit your files using sed, cat or >; see this Q&A.
- Run commands with apt-get.
Note: the user-data script runs as root, so you don't need sudo when running commands. A combined sketch of these steps is below.
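Putting those steps together, here is a minimal user-data sketch for a Debian/Ubuntu AMI; the bucket name, archive name, target directory, and port change are all hypothetical placeholders, and the instance is assumed to have an IAM role allowing s3:GetObject:

#!/bin/bash
# Runs once, as root, at first boot when supplied as EC2 user data.
apt-get update -y
apt-get install -y unzip apache2 awscli   # awscli only needed if the AMI doesn't bundle the CLI

# Download and extract the package from S3 (placeholder bucket/key)
aws s3 cp s3://my-bucket/site.zip /tmp/site.zip
unzip -o /tmp/site.zip -d /var/www/example.com

# Example in-place edit: move Apache from port 80 to 8080
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf

systemctl restart apache2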

Managing multiple Amazon EC2 instances with Puppet

I'm working with multiple instances (10 and more), and I want to configure them without accessing each of them. Currently I'm looking at Puppet, and it seems to be what I need. I've tried it for two instances and it's OK, but I installed Puppet manually on both instances, and also manually sent the certificate from each agent via puppet agent. Is there any way to install Puppet automatically and send the certificate for each node, without accessing them?
You can use scripts within UserData to autoconfigure your instance (see Running Commands on Your Linux Instance at Launch) by installing Puppet, configuring it, and running it. Keep in mind that UserData is normally limited to 16 KB and that the data in there is stored base64 encoded.
You can also build your own AMI with configuration scripts that run on boot, and then use that to download configuration from a central server, or read it out of userdata (e.g. curl http://169.254.169.254/latest/user-data | bash -s).
For example, this is something we had in our CloudFormation template that installed a configuration service on our hosts.
"UserData": { "Fn::Base64" : { "Fn::Join" : [ "\n", [
    "#!/bin/sh",
    "curl -k -u username:password -f -s -o /etc/init.d/ec2 https://scriptserver.example.com/scripts/ec2",
    "chmod a+x /etc/init.d/ec2",
    "/etc/init.d/ec2 start"
] ] } }
Ideally the 'scriptserver' is in the same VPC, since the username and password aren't terribly secure (they're stored unencrypted on the machine, the script server, and in the CloudFormation and EC2 services).
The advantage of bootstrapping everything with userdata instead of building an AMI is flexibility: you can update your bootstrap scripts, generate new instances, and you're done. The disadvantages are speed, since you'll have to wait for everything to install and configure each time an instance launches (beware CloudFormation timeouts), and stability, since if your script installs packages from a public repository (e.g. apt-get install mysql), the packages can be updated at any time, potentially introducing untested software into your environment. The workaround for the latter is to install software from locations you control.
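For the Puppet case specifically, a minimal user-data sketch along those lines; puppet.example.com is a placeholder, package names vary by distribution, and it assumes the Puppet server autosigns (or pre-signs) agent certificates so no manual signing step is needed:

#!/bin/bash
# Bootstrap a Puppet agent at first boot (Debian/Ubuntu package names assumed)
apt-get update -y
apt-get install -y puppet-agent

# Point the agent at the Puppet server (placeholder hostname)
/opt/puppetlabs/bin/puppet config set server puppet.example.com --section main

# The first run submits a certificate signing request; with autosigning
# enabled on the server, the node is enrolled without manual access.
/opt/puppetlabs/bin/puppet agent --test --waitforcert 60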

Is there any way to access values from the EC2 tags in cloud-init

I know I can access the tags via the metadata and CLI tools, but is there any way to access them whilst running cloud-init? Ideally I'd like to look for a tag called hostname and use it to set the machine's host name.
Thanks
This command can be run from userdata (or any time, really) to access the instance ID from the metadata, and use it to pull the tag called "hostname". You could assign this to a variable, or use the output to directly set the hostname of the instance.
aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].Tags[?Key==`hostname`].Value' --instance-ids `curl -s http://169.254.169.254/latest/meta-data/instance-id`
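For example, a sketch of the variable-assignment route; it assumes the AWS CLI has a region configured and the instance role allows ec2:DescribeInstances:

#!/bin/bash
# Look up this instance's "hostname" tag and apply it as the machine hostname
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
HOSTNAME_TAG=$(aws ec2 describe-instances --output text \
  --query 'Reservations[*].Instances[*].Tags[?Key==`hostname`].Value' \
  --instance-ids "$INSTANCE_ID")
hostnamectl set-hostname "$HOSTNAME_TAG"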
In more recent versions of cloud-init, metadata is available via Jinja templates. EC2 also recently started exposing tags via the EC2 instance metadata store (this has to be enabled per instance). Taken together, you can use this information to retrieve tags as part of your cloud-init configuration.
Example:
cloud-init query -f "{{ds.meta_data.tags.instance}}"
As part of the user data:
## template: jinja
#cloud-config
final_message: |
  My name is {{ds.meta_data.tags.instance.Name}}
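Because tags only show up in the instance metadata once the option is turned on, here is a sketch of enabling it for an existing instance (the instance ID is a placeholder):

aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --instance-metadata-tags enabled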
