Managing multiple Amazon EC2 instances with Puppet - amazon-ec2

I'm working with multiple instances (10 or more), and I want to configure them without logging in to each of them. I'm currently looking at Puppet and it seems to be what I need. I've tried it with two instances and it works, but I installed Puppet manually on both instances and also manually submitted the certificate request from each agent via puppet agent. Is there any way to install Puppet and have each node's certificate handled automatically, without accessing the instances one by one?

You can use scripts within UserData to autoconfigure your instance (see Running Commands on Your Linux Instance at Launch) by installing Puppet, configuring it, and running it. Keep in mind that UserData is normally limited to 16 KB and that the data is passed base64-encoded.
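For instance, here is a minimal user-data sketch that installs the Puppet agent and points it at a master; the master hostname, the apt-based distribution, the config path, and the use of autosigning on the master (which is what removes the manual certificate exchange) are all assumptions:
#!/bin/bash
# Install the Puppet agent (assumes a Debian/Ubuntu image; use yum/dnf on RHEL-family AMIs)
apt-get update -y
apt-get install -y puppet
# Point the agent at the master; puppet.example.com is a placeholder,
# and the config path may be /etc/puppetlabs/puppet/puppet.conf on newer Puppet versions
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF
# Start the agent; the certificate request is only signed automatically if the
# master is configured to autosign requests from these hosts (assumption)
puppet agent --enable
puppet agent --test --waitforcert 60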
You can also build your own AMI with configuration scripts that run on boot, and then use that to download configuration from a central server, or read it out of userdata (e.g. curl http://169.254.169.254/latest/user-data | bash -s).
For example, this is something we had in our CloudFormation template that installed a configuration service on our hosts:
"UserData": { "Fn::Base64" : { "Fn::Join" : [ "\n", [
"#!/bin/sh",
"curl -k -u username:password -f -s -o /etc/init.d/ec2 https://scriptserver.example.com/scripts/ec2",
"chmod a+x /etc/init.d/ec2",
"/etc/init.d/ec2 start"] ] } }
Ideally the 'scriptserver' is in the same VPC, since the username and password aren't terribly secure (they're stored unencrypted on the machine, on the script server, and in the CloudFormation and EC2 services).
The advantage of bootstrapping everything with UserData instead of building an AMI is flexibility: you can update your bootstrap scripts, generate new instances, and you're done. The disadvantages are speed, since you'll have to wait for everything to install and configure each time an instance launches (beware CloudFormation timeouts), and stability, since if your script installs packages from a public repository (e.g. apt-get install mysql), the packages can be updated at any time, potentially introducing untested software into your environment. The workaround for the latter is to install software from locations you control.

Related

GitLab Executor Error "You lack permissions to write to system configuration."

I have an AWS EC2 instance running Linux. I have installed GitLab Runner on it, following https://docs.gitlab.com/runner/install/linux-repository.html (the instructions for RHEL/CentOS/Fedora). It is a shell executor and I have registered it. I then installed Docker on the EC2 instance.
When I start the pipeline, though, I get this error:
Initialized empty Git repository in /home/gitlab-runner/builds/xxxx/0/xxxx/xxxx/.git/
Created fresh repository.
Checking out b25b6b1a as develop...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
$ amazon-linux-extras install docker
You lack permissions to write to system configuration. /etc/yum.repos.d/amzn2-extras.repo
Any hint?
Since you're using the "shell" executor, commands that you run within your pipeline essentially act as if you had SSH'd into the box as the gitlab-runner user (assuming you didn't change the user the runner uses by default). That user does not have the privileges to install packages by default, which is the error you're seeing here.
You'll want to add your gitlab-runner user to the sudoers file, then use sudo to run your command.
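A minimal sketch of that change; the blanket passwordless-sudo grant is an assumption, so scope it down if your security policy requires:
# Allow gitlab-runner to use sudo without a password (run this as root)
echo 'gitlab-runner ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/gitlab-runner
chmod 440 /etc/sudoers.d/gitlab-runner
# Then, in the job script, prefix the failing command with sudo:
sudo amazon-linux-extras install docker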
All that having been said, is there a compelling reason for you not to use the docker executor? The isolation between jobs is much cleaner. If you have multiple jobs running on a shell executor at the same time and they both install packages, for example, one of the two jobs will fail because it can't acquire the package manager's lock. You can work around this with a resource_group, but Docker is a much more elegant solution.

How do I use a CloudFormation output value in a script with Ansible

I'm trying to set up some automation for a school project. The gist of it is:
Install an EC2 instance via CloudFormation. Then
Use cfn-init to
Install a very basic Ansible configuration
Download an Ansible playbook from S3
Run said playbook to install a Redshift cluster via CloudFormation
Install some necessary packages
Install some necessary Python modules
Download a Python script that will
Connect to the Redshift database
Create a table
Use the COPY command to import data into the table
It all works up to the point of executing the script. Doing so manually works a treat, but that is because I can copy the created Redshift endpoint into the script for the database connection. The issue I have is that I don't know how to extract that output value from CloudFormation so it can be inserted into the script for a fully automated (save the initial EC2 deployment) solution.
I see that Ansible has at least one means of doing so (cloudformation_facts, for instance), but I'm a bit foggy on how to implement it. I've looked at examples but it hasn't become any clearer. Without context I'm lost and so far all I've seen are standalone snippets.
In order to ensure an answer is listed:
I figured out the describe-stacks and describe-stack-resources subcommands of the aws cloudformation CLI command. Using these, I was able to track down the information I needed. In particular, I needed to access a role. This is the command that I used:
aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
--query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
--output text
I first used the describe-stacks subcommand to get a list of my stacks. The relevant stack is the first in the list (an array), so I used Stacks[0] at the top of my query. I then used Outputs, since I am interested in a value from the CloudFormation output list. I know the name of the key (RedshiftClusterEndpointAddress), so I used that as the filter. I then used OutputValue to return the value of RedshiftClusterEndpointAddress.
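To get that value into the Python script without any manual copying, you can capture the CLI output in a shell variable on the instance; a minimal sketch, where the script name and its --host flag are placeholders for however your loader script accepts the endpoint:
#!/bin/bash
# Capture the Redshift endpoint published as a CloudFormation stack output
REDSHIFT_HOST=$(aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
  --query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
  --output text)
# Hand it to the loader script instead of hard-coding it
python load_redshift.py --host "$REDSHIFT_HOST"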

How to deploy Django Fixtures to Amazon AWS

I have my app stored on GitHub. To deploy it to Amazon, I use their EB deploy command which takes my git repository and sends it up. It then runs the container commands to load my data.
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
The problem is that I don't want the fixtures in my git. Git should not contain this data since it's shared with other users. How can I get my AWS to load the fixtures some other way?
You can use the old-school way: scp the fixtures to the EC2 instance.
Go to the EC2 console to find the actual EC2 instance associated with your EB environment (I assume you only have one instance). Write down the public IP, and then connect to the instance as you would with a normal EC2 instance.
For example:
scp -i [YOUR_AWS_KEY] [MY_FIXTURE_FILE] ec2-user@[INSTANCE_IP]:[PATH_ON_SERVER]
Note that the username has to be ec2-user.
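Once the file is on the instance you still have to load it by hand; a minimal sketch, assuming the fixture was copied to /tmp/fixtures.json and the same virtualenv path used in the collectstatic command above (the application directory is an assumption based on the older EB Python platform layout):
# On the instance, after the scp above
source /opt/python/run/venv/bin/activate
cd /opt/python/current/app   # default app directory on that platform (assumption)
python manage.py loaddata /tmp/fixtures.json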
But I do not recommend this way to deploy the project because you may need to manually execute the commands. This is, however, useful for me to get the fixture from a live server.
To avoid tracking fixtures in git, I use a simple workaround: create a local branch for EB deployment and track the fixtures there, along with other environment-specific credentials. Such EB branches should never be pushed to remote git repositories.

AWS bash script on EC2 instance launch

I would like to automate these steps:
Unzip a zip package (is it possible to upload this zip to an S3 bucket and download it during the script? If yes, how?)
Edit apache configuration files (port.conf, /etc/apache2/sites-available/example.com.conf)
Run apt-get commands.
I really do not know how to create a script file to be run at EC2 instance startup.
Could anybody help me, please?
Thank you very much.
What you're looking for is User Data, which gives you the possibility to run your script when the EC2 instance is launched.
When you create your EC2 instance, in step 3 (Configure Instance Details), go to the bottom of the page and expand "Advanced Details". From there you can enter your script.
If you're using an Amazon Linux AMI, the AWS CLI is built in and you can use it; make sure you have an EC2 IAM role defined with the necessary rights on your AWS resources.
Now, in terms of your script, this is vague, but roughly (see the sketch after this list):
Run aws s3 cp s3_file local_file to download the zip file from S3 onto the instance, then use the Linux unzip command to extract its contents
Edit your files using sed, cat or >; see this Q&A
Run your commands with apt-get
Note: the user-data script runs as root, so you don't need sudo when running commands.
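Putting those pieces together, here is a hedged sketch of such a user-data script; the bucket name, zip name, target directory, and the specific sed edits are placeholders, and it assumes a Debian/Ubuntu image plus an instance role that allows s3:GetObject:
#!/bin/bash
# Install what we need first (Debian/Ubuntu package names assumed)
apt-get update -y
apt-get install -y apache2 unzip awscli
# Download and unpack the site bundle from S3 (bucket and key are placeholders)
aws s3 cp s3://my-bucket/site.zip /tmp/site.zip
unzip -o /tmp/site.zip -d /var/www/example.com
# Example in-place edits to the Apache configuration (patterns are illustrative only)
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
sed -i 's/:80>/:8080>/' /etc/apache2/sites-available/example.com.conf
a2ensite example.com.conf
systemctl restart apache2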

capistrano is only run for servers matching

I am trying to write a Capistrano task that will back up databases on multiple servers. The bash script that backs up the databases is on my local machine. However, Capistrano outputs this error message:
`backup' is only run for servers matching {}, but no servers matched
I am new to Capistrano; is there some kind of setting so that I can just run local commands?
Without a little more information, it's difficult to say exactly what the problem might be. It sounds like you're trying to run a bash script that's on your local computer on several remote servers. This is not something that Capistrano can do: it will run commands on remote servers, but only if the commands are present on those servers. If your bash script is something that needs to be run from the database server, you'll need to upload the script to those servers before running it with Capistrano. If, on the other hand, you're trying to run a script that connects to those servers itself, there's no reason to involve Capistrano; running commands on remote servers over an SSH connection is what it's designed for. If you could post your Capfile, including the tasks you are trying to run, we might be able to give you more specific assistance.
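If the backup script just needs to run once per database host and you don't need Capistrano's machinery, a plain SSH loop from your local machine is enough; a minimal sketch, where the hostnames, the user, and the script path are placeholders:
#!/bin/bash
# Stream the local backup script to each database host over SSH and run it there
for host in db1.example.com db2.example.com; do
  ssh deploy@"$host" 'bash -s' < ./backup_databases.sh
done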

Resources