Pass files to S3 from EC2 instance - shell

I have an AWS EC2 instance from which I create new EC2 instances using the "ec2-run-instances" command.
Each new instance is pre-configured with the EC2 command line API and the s3cmd tool.
While creating an instance I pass user data to it, containing a command that should transfer a file from that new instance to an AWS S3 bucket, as follows:
s3cmd put res.doc s3://BucketName/DocFiles/res.doc
but it does not transfer res.doc to the bucket.
After that I came to know that this only works for files that exist on the first EC2 instance, the one from which I create the new instances; res.doc does not exist on the new instance where the user-data script actually runs.
So how can I solve this problem?
The script file is here:
str=$"#! /bin/bash"
str+=$"\ncd /home"
str+=$"\nmkdir pravin"
str+=$"\ns3cmd put res.doc s3://BuckectName/DocFiles/res.csv"
ud=`echo -e "$str" |base64`
ec2-run-instances ami-784c2823 -t t1.micro -g group -n 1 -k key1 -d "$ud"
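One way out, sketched here as an assumption rather than taken from the thread: since the user data runs on the new instance, res.doc has to be put there first, e.g. fetched from a location the new instance can reach, before it can be uploaded. The bucket paths below are illustrative:
#! /bin/bash
# This runs on the NEW instance, so the file must be obtained there first.
cd /home
mkdir -p pravin && cd pravin
# Hypothetical source: fetch res.doc from somewhere the new instance can
# reach, e.g. a staging prefix in the same bucket.
s3cmd get s3://BucketName/Staging/res.doc res.doc
# Now the local file exists and the upload can succeed.
s3cmd put res.doc s3://BucketName/DocFiles/res.doc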

Related

Assign output from AWS CLI command to 2nd AWS CLI command

I am writing an automation task for creating an AWS AMI image. The goal is to get the output of aws ec2 import-image (the .ova to AMI conversion) and use the returned task ID in a second command:
importaskid=$(aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" | jq -r '.ImportTaskId')
aws ec2 create-tags --resources echo $importaskid --tags 'Key=Name, Value=acp_ami_test'
I am able to echo $importaskid and see the expected output, but when I use aws ec2 create-tags, the AMI image is created without a name and the output of the second command is empty.
Appreciate your assistance.
This should work for you:
# set bash variable "importaskid":
importaskid=$(aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" | jq -r '.ImportTaskId')
# Verify that importaskid is set correctly
echo $importaskid
# Now use it:
aws ec2 create-tags --resources "$importaskid" --tags 'Key=Name, Value=acp_ami_test'
The "$()" syntax for assigning the output of a command to a variable is discussed here: https://www.cyberciti.biz/faq/unix-linux-bsd-appleosx-bash-assign-variable-command-output/
The double quotes in "$importaskid" would be necessary if
the value of "$importaskid" happens to have spaces in it.
Hope that helps!
Thanks for the reply. When I run the command with the task ID filled in directly instead of echo $ImportTaskId, see below:
aws ec2 create-tags --resource import-ami-XXXXXXXXXXXXX --tags Key=Name,Value='name_ami_test'
I get an empty response and the name is not assigned in the AWS console, so I will speak to AWS support, double-check the syntax, and check whether the name should be assigned to the resulting AMI ID and not to import-ami-XXXXXXXXXXXXXXXX.
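For what it's worth, aws ec2 create-tags prints nothing on success, so an empty response by itself is not an error. A sketch of the commenter's last idea, tagging the AMI produced by the import rather than the import task itself (the polling loop is an assumption, not from the thread):
#!/bin/bash
importaskid=$(aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" | jq -r '.ImportTaskId')
# Poll the import task until it completes.
while [ "$(aws ec2 describe-import-image-tasks --import-task-ids "$importaskid" | jq -r '.ImportImageTasks[0].Status')" != "completed" ]; do
    sleep 30
done
# Grab the AMI ID the import produced and tag that.
amiid=$(aws ec2 describe-import-image-tasks --import-task-ids "$importaskid" | jq -r '.ImportImageTasks[0].ImageId')
aws ec2 create-tags --resources "$amiid" --tags 'Key=Name,Value=acp_ami_test'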

Problem with creating an EBS snapshot on a server (Linux EC2 instance)

I am working on a task that requires running a script on a server. The script grabs the instance ID, creates a snapshot, runs yum update -y, and reboots the server.
#!/bin/bash
set -eu
# Set Vars
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export REGION=$(curl --silent http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
export INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
echo $AWS_ACCOUNT_ID
echo $REGION
# Fetch VolumeId
volumeid=$(aws ec2 describe-instances --region "$REGION" --instance-ids "$INSTANCE_ID" --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[].[BlockDeviceMappings[*].{VolumeName:Ebs.VolumeId}]" --output text)
echo $INSTANCE_ID
echo $volumeid
# Create snapshot
aws ec2 create-snapshot --region $REGION --volume-id $volumeid --description "Test-Snapshot-$INSTANCE_ID"
read -p "waiting a while to complete creation of EBS snapshot" -t 100
echo -e "\x1B[01;36m Snapshot has been created \x1B[0m"
I can get the instance ID, but when I try to create the snapshot from the instance ID, I get the following error:
ERROR
us-east-1
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Thank you so much in advance for your support.
Your instance, and with it your script, is missing the ec2:DescribeInstances permission required to run the aws ec2 describe-instances command.
You should attach that permission to the instance role assigned to the instance (or create a new role with the permission attached if none is assigned yet).
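A minimal sketch of how that could look with the AWS CLI (the role and policy names are hypothetical, and the policy is intentionally broad for brevity):
# Attach an inline policy granting DescribeInstances (and the snapshot call
# the script makes next) to a hypothetical instance role "my-instance-role".
aws iam put-role-policy \
    --role-name my-instance-role \
    --policy-name allow-describe-and-snapshot \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:CreateSnapshot"],
        "Resource": "*"
      }]
    }'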
Your IAM permissions do not grant access to DescribeInstances.
If you're using an IAM role for the instance, check its policies.
If it's a user, make sure the credentials are actually being picked up, either via the AWS credentials file or via environment variables.
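Unrelated to the permissions error: once the script works, the read -t 100 pause near the end can be replaced with the CLI's built-in waiter. A sketch, assuming the snapshot ID is captured via --query:
# Create the snapshot, capture its ID, and wait until it completes.
snapshot_id=$(aws ec2 create-snapshot --region "$REGION" --volume-id "$volumeid" --description "Test-Snapshot-$INSTANCE_ID" --query SnapshotId --output text)
aws ec2 wait snapshot-completed --region "$REGION" --snapshot-ids "$snapshot_id"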

Shell script to start EC2 instances and ssh, with a delay before the second command

I have written a small shell script to automate starting and logging in to my AWS instances via the terminal.
#!/bin/bash
aws ec2 start-instances --instance-ids i-070107834ab273992
public_ip=aws ec2 describe-instances --instance-ids i-070107834ab273992 \
--query 'Reservations[*].Instances[*].PublicDnsName' --output text
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY"/mumbai-instance-1.pem\
ec2-user#$public_ip
The problem is the public_ip variable: I want its value to be used in the ssh line.
1) How do I get the value of a variable into a command?
2) The instance takes some time to boot when switched on, so how do I keep checking that it has started after the aws ec2 start-instances command, or retrieve the public IP once it is fully up, and only then ssh into it?
I am not good at Python and know just the basics, so if there is a pythonic way of doing this, an example script somewhere would be helpful to look at.
You do not set the variable public_ip in your script. It would not surprise me if the script complained about "ec2: command not found".
To set the variable:
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
(disclaimer: I have not used aws so I assume that the command is correct).
The information on whether an instance is running should be available with
aws ec2 describe-instance-status
You may want to apply some filters and/or grep for a specific result. You could try polling with a while loop:
while ! aws ec2 describe-instance-status --instance-ids i-070107834ab273992 | grep 'something that characterizes running' ; do
    sleep 5
done
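Putting the pieces together, here is a sketch of the whole flow that leans on the CLI's built-in waiter instead of a hand-rolled polling loop (aws ec2 wait instance-running blocks until the instance reports the running state; the key path and instance ID are taken from the question):
#!/bin/bash
set -eu
instance_id=i-070107834ab273992
aws ec2 start-instances --instance-ids "$instance_id"
# Block until EC2 reports the instance as running.
aws ec2 wait instance-running --instance-ids "$instance_id"
# The public DNS name is only populated once the instance is up.
public_ip=$(aws ec2 describe-instances --instance-ids "$instance_id" \
    --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY/mumbai-instance-1.pem" ec2-user@"$public_ip"
Note that "running" does not guarantee sshd is already accepting connections, so a retry around the ssh call may still be needed.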

Script to automate starting and stopping AWS EC2 instances

I've installed a Cloudera cluster on four Amazon EC2 instances.
During certain periods, such as Monday-Friday nights, Saturdays, and Sundays, I don't need those four instances, so stopping them would cut costs.
How can I automate starting and stopping those Amazon EC2 instances with a script?
Could anybody give me an example of such a script?
Thanks,
You can create a script to stop and start the instance(s), or run the commands directly, via crontab on Linux or the task scheduler on Windows.
For example, if you want to stop the instances at 11.00 pm, add the line below to your crontab (open it with crontab -e):
0 23 * * * sh stop.sh
The format is:
m h dom mon dow command
To start an instance:
aws ec2 start-instances --instance-ids i-1a1234
To stop an instance:
aws ec2 stop-instances --instance-ids i-1a1234
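A sketch of what stop.sh could look like for the four-node case in the question, with hypothetical instance IDs; a matching start.sh would call aws ec2 start-instances with the same IDs:
#!/bin/bash
# stop.sh - stop all four cluster nodes (instance IDs are placeholders)
aws ec2 stop-instances --instance-ids i-1a1234 i-2b2345 i-3c3456 i-4d4567
The pair can then be scheduled, e.g. stop every night at 23:00 and start again on weekday mornings:
0 23 * * * sh stop.sh
0 7 * * 1-5 sh start.sh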
I have written a small shell script to automate starting and logging in to my AWS instances via the terminal. You can use it:
#!/bin/bash
aws ec2 start-instances --instance-ids i-070107834ab273992
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 \
--query 'Reservations[*].Instances[*].PublicDnsName' --output text)
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY"/mumbai-instance-1.pem \
ec2-user@$public_ip

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket, and having it downloaded and run upon instantiation. It all feels a little rickety, though, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 Server-Side Encryption (the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
    items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
    keys=$(echo $items | jq .key -r)
    for key in $keys
    do
        export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
    done
fi
exec "$@"
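For illustration, a hypothetical invocation of a container built with this entrypoint (the table name, region, and image are placeholders): in AWS the secrets are pulled from credstash at startup, while in a dev environment you just pass the variables directly.
# AWS environment: pull secrets from the credstash table at startup
docker run -e CREDENTIAL_STORE=my-credential-store -e AWS_DEFAULT_REGION=us-east-1 myorg/myapp:latest
# Dev environment: no CREDENTIAL_STORE set, pass secrets as plain env vars
docker run -e MONGO_USER=dev -e MONGO_PASS=dev myorg/myapp:latest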
