Passing S3cmd commands As User Data To Ec2 - shell

I have one AWS EC2 instance, and from it I am creating slave EC2 instances.
While creating each slave instance I pass user data to it. In that user data I have written code for creating a new directory on the EC2 instance and downloading a file from an S3 bucket.
The problem is that the script creates the new directory on the EC2 instance, but it fails to download the file from the S3 bucket.
User data script:
#! /bin/bash
cd /home
mkdir pravin
s3cmd get s3://bucket/usr.sh >> download.log
As shown above, mkdir pravin creates the directory, but s3cmd get s3://bucket/usr.sh fails to download the file; download.log also gets created but remains empty.
How can I solve this problem? (The AMI used for this is preconfigured with s3cmd.)

Are you by chance running Ubuntu? Then Shlomo Swidler's question "Python s3cmd only runs from login shell, not during startup sequence" might apply exactly:
The s3cmd Python script (this one: http://s3tools.org/s3cmd ) seems to only work when run via an interactive login session, but not when run via scripts during the boot process.
Mitch Garnaat suggests that one should always beware of environmental differences inflicted by executing code within User-Data Scripts:
It's probably related to some difference in your environment when you are logged in as opposed to when the script is running as part of the startup sequence. I have run into similar problems with cron jobs.
This turned out to be the problem indeed, Shlomo Swidler summarizes the 'root cause' and a solution further down in this thread:
Mitch, your comment helped me realize what's different about the startup sequence: the operative user is root. When I log in, I'm the "ubuntu" user.
s3cmd looks in the current user's ~/.s3cfg - which didn't exist as /root/.s3cfg, only as /home/ubuntu/.s3cfg.
Luckily s3cmd allows you to specify the config file's location with --config /home/ubuntu/.s3cfg.
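Applying that fix to the script from the question, a minimal sketch of the corrected user data might look like this (assuming the s3cmd configuration lives at /home/ubuntu/.s3cfg, as in Shlomo's case; adjust the paths to your AMI):
#!/bin/bash
# User data runs as root, so point s3cmd at the configured user's .s3cfg explicitly
cd /home
mkdir -p pravin
cd pravin
s3cmd --config /home/ubuntu/.s3cfg get s3://bucket/usr.sh >> download.log 2>&1
Redirecting stderr as well (2>&1) also captures any s3cmd error message in download.log instead of leaving the file empty.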

Related

Shell Script Issue Running Command Remotely using SSH

I have a deploy script in which I want to clear the cache of my CDN. When I am on the server and run the script, everything is fine. However, when I SSH in and run only that file (i.e. without actually getting into the server, cd-ing into the directory, and running it), it fails and states that my doctl command cannot be found. This seems to only be an issue with this program over SSH; running systemctl --help works fine.
Please note that I have installed Digital Ocean's doctl using sudo snap install doctl and it is there.
Here is the .sh file (minus comments):
#!/bin/sh
doctl compute cdn flush [MYID] --files [*] # static cache
So I am not sure what the issue is. Anybody have an idea?
Again, if I get onto the server and run the file, it all works. But here is the SSH command I use that returns the error:
ssh root@123.45.678.999 "/deploy/clear_digital_ocean_cache.sh"
And here is the error.
/deploy/clear_digital_ocean_cache.sh: 10: doctl: not found
Well, one solution was to change the command to use an absolute path inside my .sh file, like so:
#!/bin/sh
/snap/bin/doctl compute cdn flush [MYID] --files [*] # static cache
I realized that I could run my user commands over SSH (like systemctl), so the options were either to change where doctl was located (i.e. put it in a user bin directory) or to ensure the command was called with an absolute path by adding /snap/bin/ in front of it.
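Another option (a sketch, assuming doctl is installed via snap under /snap/bin, as in the fix above) is to extend PATH at the top of the script, since non-interactive SSH commands don't source the login-shell environment that normally puts snap's bin directory on PATH:
#!/bin/sh
# Non-interactive SSH sessions skip the login profile, so add snap's bin dir explicitly
PATH="/snap/bin:$PATH"
export PATH
doctl compute cdn flush [MYID] --files [*] # static cache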

How can I automate entering input for a command in a bash script that runs on AWS EC2 launch?

For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from dockerhub and run it. To login to dockerhub I need to input a username and password, and this is what I would like to automate but haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input, and I would like to automate entering that input.
Thanks in advance!
If just entering a password for docker login is your problem, then I would suggest searching for the docker login manual. 30 seconds on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
which will read the password from the file my_password.txt in the current user's home directory.
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
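For illustration, a minimal sketch of such a helper script that accepts the credentials as positional parameters (the path and names are hypothetical; the values remain visible in the launch configuration, which is the drawback mentioned above):
#!/bin/bash
# /usr/local/bin/docker-login.sh -- hypothetical helper baked into the AMI
# UserData would call it as: /usr/local/bin/docker-login.sh 'myuser' 'mypassword'
DOCKER_USER="$1"
DOCKER_PASS="$2"
echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin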
The better solution here is to store your containers in ECS, and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along the lines of:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
Below is what I could suggest at this moment:
1. Create an S3 bucket, e.g. mybucket.
2. Put a text file (doc_pass.txt) with your password into that S3 bucket.
3. Create an IAM policy which has GET access to just that particular S3 bucket and add this policy to the EC2 instance role (a sketch of such a policy follows after the script below).
4. Put the script below in your user data:
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you just need to make sure your S3 bucket is secure, and no secrets get displayed in the user data.
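For step 3, a minimal read-only policy scoped to just that object could be attached with the AWS CLI, roughly like this (a sketch; the role and policy names are hypothetical, and the bucket name follows the example above):
# Allow the instance role to read only the password object from S3
aws iam put-role-policy \
  --role-name my-ec2-instance-role \
  --policy-name ReadDockerPassword \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/doc_pass.txt"
    }]
  }'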

su to LDAP user in bash script

We have a partial LDAP integration on our RHEL servers. I'm trying to create a setup script to run on new servers. The first thing I need my script to do is log into an LDAP user account so that its home directory gets created. If I put it in a script like so (and run it as root):
#!/bin/bash
su - LDAPaccount
It fails, saying the user doesn't exist.
If I just run the su - LDAPaccount command myself, it creates the user's home directory and switches me to that user. Does anyone know why running the su command in a bash script fails and how I can get around this?
For anyone who finds this: my solution was to fully qualify everything:
#!/bin/bash
/bin/su - LDAPaccount -c "/usr/bin/ls"
When I run that, it works fine.
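If the setup script needs to do this for more than one LDAP account, the same fully qualified pattern can be looped (a sketch; the account names are hypothetical):
#!/bin/bash
# Trigger home-directory creation by running a trivial command as each LDAP user
for account in ldapuser1 ldapuser2; do
    /bin/su - "$account" -c "/usr/bin/id"
done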

AWS Launch Configuration not picking up user data

We are trying to build an autoscaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The autoscaling group itself is configured with a launch configuration (let's say LC). As far as I could understand from the AWS documentation, pasting a script, as-is, in the user data section of the launch configuration would run that script for every instance launched into an auto scaling group associated with that launch configuration.
For example, pasting this in user data should leave a file named configure in the home folder of a t2.micro Ubuntu instance:
#!/bin/bash
cd
touch configure
Our end goal is:
Increase the number of instances in the auto scaling group, have them launch with our startup script, and have each new instance added behind the load balancer tagged with the auto scaling group. But the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script at time of launching any new instance in an auto scaling group?
3. Is there any way to verify if user data was really picked up by the launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data will be executed as user root, not ubuntu. So if your script worked fine, you would find your file in /root/configure, NOT IN /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it is incorrect and fails at the cd command, so the file is not created.
The cd builtin, when given no directory, tries to do cd $HOME; however, $HOME is NOT SET during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user-data script by inspecting /var/log/cloud-init.log log file, in particular checking for errors in it: grep -i error /var/log/cloud-init.log
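To answer question 3, you can also confirm exactly what user data the instance received by querying the instance metadata service from the instance itself (a quick check using the standard metadata endpoint):
# Run on the instance: prints the raw user data that cloud-init received at launch
curl -s http://169.254.169.254/latest/user-data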
Hope it helps!

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket, and having it downloaded and run upon instantiation. But it all feels a little rickety, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 Server Side Encryption (because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use a v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data.
Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
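For context, storing and retrieving a single secret with credstash looks roughly like this (a sketch; the key name and value are hypothetical):
# One-time setup: create the backing DynamoDB table
credstash setup
# Store a secret, encrypted with your KMS master key
credstash put mongo_pass 'Top-Secret-Dont-Tell-Anyone'
# Retrieve it later, e.g. from an entrypoint script
MONGO_PASS=$(credstash get mongo_pass)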
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
  items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
  keys=$(echo $items | jq .key -r)
  for key in $keys
  do
    export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
  done
fi
exec "$@"
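Running a container built with that entrypoint then looks something like this (a sketch; the credstash table name is hypothetical, and the image name follows the question's example):
# Secrets are resolved inside the container at startup, so nothing sensitive
# shows up in docker inspect or in the UserData script
docker run -e CREDENTIAL_STORE=myapp-secrets \
           -e AWS_DEFAULT_REGION=us-east-1 \
           --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest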
