How do I prompt for an MFA key to generate and use credentials for AWS CLI access? - bash

I have several Bash scripts that invoke AWS CLI commands whose permissions now require MFA, and I want these scripts to prompt for a code generated by my MFA device so that they can run with the necessary authentication.
But there seems to be no simple built-in way to do this. The only documentation I can find describes a complicated process of calling aws sts get-session-token and then saving each returned value in a configuration, and it is unclear how that configuration is then used.
To be clear, what I'd like is that when I run one of my scripts containing AWS CLI commands that require MFA, I'm simply prompted for the code, and providing it allows the AWS CLI operations to complete. Something like:
#!/usr/bin/env bash
# (1) prompt for generated MFA code
# ???
# (2) use entered code to generate necessary credentials
aws sts get-session-token ... --token-code $ENTERED_VALUE
# (3) perform my AWS CLI commands requiring MFA
# ....
It's not clear to me how to prompt for this when needed (which is probably down to my not being proficient with bash), or how to use the output of get-session-token once I have it.
Is there a way to do what I'm looking for?
I've tried to trigger a prompt by specifying a --profile with an mfa_serial entry, but that doesn't work either.

OK, after spending more time on this with a colleague, we have come up with a much simpler script. It does all the credentials-file work for you and is much easier to read. It also allows the new tokens for all your environments to live in the same credentials file. The initial call to look up your MFA device requires your long-term account keys in the credentials file; the script then generates your MFA session credentials and writes them back into the credentials file.
#!/usr/bin/env bash

function usage {
    echo "Example: ${0} dev 123456"
    exit 2
}

if [ $# -lt 2 ]; then
    usage
fi

MFA_SERIAL_NUMBER=$(aws iam list-mfa-devices --profile "bh${1}" --query 'MFADevices[].SerialNumber' --output text)

function set-keys {
    aws configure set aws_access_key_id "${2}" --profile="${1}"
    aws configure set aws_secret_access_key "${3}" --profile="${1}"
    aws configure set aws_session_token "${4}" --profile="${1}"
}

case ${1} in
    dev|qa|prod) set-keys "${1}" $(aws sts get-session-token --profile "bh${1}" --serial-number "${MFA_SERIAL_NUMBER}" --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text --token-code "${2}") ;;
    *) usage ;;
esac
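Once it has run, the temporary credentials sit under the environment's profile, so the rest of your scripts only need a --profile flag. A quick usage sketch (the script filename here is an assumption):
./mfa-set-keys.sh dev 123456   # hypothetical filename for the script above
aws s3 ls --profile dev        # subsequent commands use the MFA session credentials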

Inspired by @strongjz's and @Nick's answers, I wrote a small Python command to which you can pipe the output of the aws sts command.
To install:
pip install sts2credentials
To use:
aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
    --token-code 123456 \
    --profile=your-profile-name \
    | sts2credentials
This will automatically add the access key ID, the secret access key, and the session token under a new "sts" profile in your ~/.aws/credentials file.
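Your scripts can then point at that new profile; for example:
aws s3 ls --profile sts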

For bash, you could read in the value and then set the credentials from the sts output:
echo "Type the MFA code that you want to use (6 digits), followed by [ENTER]:"
read ENTERED_VALUE
aws sts get-session-token ... --token-code "$ENTERED_VALUE"
Then you'll have to parse the output of the sts call, which contains the access key ID, secret access key, and session token:
{
    "Credentials": {
        "AccessKeyId": "ASIAJPC6D7SKHGHY47IA",
        "Expiration": "2016-06-05T22:12:07Z",
        "SecretAccessKey": "qID1YUDHaMPet5xw/vpw1Wk8SKPilFihdiMSdSIj",
        "SessionToken": "FQoDYXdzEB4aDLwmzouEQ3eckfqJxyLOARbBGasdCaAXkZ7ABOcOCNx2/7sS8N7A6Dpcax/t2G8KNTcUkRLdxI0gTvPoKQeZrH8wUrL4UxFFP6kCWEasdVIBAoUfuhdeUa1a7H216Mrfbbv3rMGsVKUoJT2Ar3r0pYgsYxizOWzH5VaA4rmd5gaQvfSFmasdots3WYrZZRjN5nofXJBOdcRd6J94k8m5hY6ClfGzUJEqKcMZTrYkCyUu3xza2S73CuykGM2sePVNH9mGZCWpTBcjO8MrawXjXj19UHvdJ6dzdl1FRuKdKKeS18kF"
    }
}
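A sketch of one way to do that parsing without hand-copying values (assuming ENTERED_VALUE holds the prompted code from above): ask the CLI to flatten the three values with --query and read them into shell variables, which you can then feed to the aws configure set commands below (in place of the default_access_key placeholders).
read -r ACCESS_KEY SECRET_KEY SESSION_TOKEN <<< "$(aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
    --token-code "$ENTERED_VALUE" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)"
# ACCESS_KEY, SECRET_KEY, and SESSION_TOKEN now hold the temporary credentials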
Then set them:
aws configure set aws_access_key_id default_access_key --profile NAME_PROFILE
aws configure set aws_secret_access_key default_secret_key --profile NAME_PROFILE
aws configure set aws_session_token default_session_token --profile NAME_PROFILE
aws configure set region us-west-2 --profile NAME_PROFILE
aws some_command --profile NAME_PROFILE
Bash Beginners Guide: Catching user input
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_08_02.html
AWS STS API Reference
http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html
AWS CLI STS Command
http://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html
I wrote something very similar to what you are trying to do in Go, here, but this is for sts assume-role rather than get-session-token.

I wrote a simple script to populate the AWS credentials file for a profile called mfa. All the bash scripts you write then just need "--profile mfa" added and they will just work. This also allows for multiple AWS accounts, as many of us have those these days. I'm sure this can be improved, but it was quick and dirty and does what you want and everything I need.
You will have to amend the details in the script to fit your accounts; I have marked them clearly with chevrons < >. N.B. Obviously, once you have populated the script with all your details it should not be copied about, unless you want unintended consequences. The credentials file is self-referential here: the standard access keys stored in it are read each time in order to create the MFA session tokens that are written back into it.
#!/bin/bash
# Change for your username - would be /home/username on Linux/BSD
dir='/Users/<your-user-name>'
region=us-east-1

function usage {
    echo "Must enter mfa token and then either dev/qa/prod"
    echo "i.e. mfa-set-aws-profile.sh 123456 qa"
    exit 2
}

if [[ $1 == "" ]]; then
    echo "Must give me a token - how do you expect this to work - DOH :-)"
    usage
fi

# Write the output from the sts command to a json file for parsing
# Just add accounts below as required
case $2 in
    dev) aws sts get-session-token --profile dev --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json ;;
    qa)  aws sts get-session-token --profile qa --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json ;;
    -h)  usage ;;
    *)   usage ;;
esac

# Remove quotes and commas to make the file easier to parse -
# N.B. gsed is GNU sed (e.g. from Homebrew) on OS X - on Linux/BSD plain sed should be just fine.
/usr/local/bin/gsed -i 's/\"//g;s/\,//g' $dir/mfa-json

# Parse the mfa info into vars for use in the aws credentials file
# (note -E so the + quantifier works in each expression)
seckey=$(grep SecretAccessKey $dir/mfa-json | gsed -E 's/[[:space:]]+SecretAccessKey: //g')
acckey=$(grep AccessKeyId $dir/mfa-json | gsed -E 's/[[:space:]]+AccessKeyId: //g')
sesstok=$(grep SessionToken $dir/mfa-json | gsed -E 's/[[:space:]]+SessionToken: //g')

# Output all the gathered info into your aws credentials file.
cat << EOF > $dir/.aws/credentials
[default]
aws_access_key_id = <your normal keys here if required>
aws_secret_access_key = <your normal keys here if required>
[dev]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[qa]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[mfa]
output = json
region = $region
aws_access_key_id = $acckey
aws_secret_access_key = $seckey
aws_session_token = $sesstok
EOF
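With the credentials file rebuilt, everything you run afterwards just points at the mfa profile, e.g.:
aws ec2 describe-instances --profile mfa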

Related

Error trying to get command status with aws cli

I seem to keep getting an error whenever I try to use bash to automate getting the status of a job.
My bash script currently looks like this:
#!/bin/bash
aws ec2 start-instances --instance-ids=$1
start=$(aws ec2 describe-instance-status --instance-id $1)
status=$(echo $start | jq '.InstanceStatuses[0].InstanceState.Name')
# wait for ec2 instance to start running before launching command
while [ "$status" != "\"running\"" ]
do
    start=$(aws ec2 describe-instance-status --instance-id $1)
    status=$(echo $start | jq '.InstanceStatuses[0].InstanceState.Name')
done
sh_command_id=$(aws ssm send-command --instance-ids=$1 --document-name "AWS-RunShellScript" --parameters 'commands=["echo Helloworld","sleep 60"]')
command_id=$(echo $sh_command_id | jq '.Command.CommandId')
full_status=$(aws ssm list-commands --command-id $command_id)
echo $command_id
aws ec2 stop-instances --instance-ids=$1
When the script gets to aws ssm list-commands --command-id $command_id I get this error.
An error occurred (ValidationException) when calling the ListCommands operation: 2
validation errors detected: Value '"67fb9aed-00bf-4741-ae1a-736ddbfba498"' at 'commandId'
failed to satisfy constraint: Member must satisfy regular expression pattern: ^[A-Fa-
f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$.; Value
'"67fb9aed-00bf-4741-ae1a-736ddbfba498"' at 'commandId' failed to satisfy constraint:
Member must have length less than or equal to 36.
When running everything individually in the terminal I get the same error. However, I do not get an error when I manually type in the commandId, as in: full_status=$(aws ssm list-commands --command-id 67fb9aed-00bf-4741-ae1a-736ddbfba498).
Is there some aws formatting I am missing here?
You might be able to avoid the use of jq by using the AWS CLI's built-in --query 'your.json.query' option to specify your JSON query, together with --output text to return plain text. It has been a while since I checked, so your mileage may vary.
I was able to verify that the following works for checking that an EC2 instance is running:
check_instance() {
    local instance_id="${1}"
    local status="_"
    while [ "${status}" != "running" ]; do
        status=$(aws ec2 describe-instance-status \
            --instance-ids ${instance_id} \
            --query "InstanceStatuses[*].InstanceState.Name" \
            --output text)
    done
    echo "${instance_id} is running"
}
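As for the original ValidationException: the quoted value in the error message ('"67fb9aed-..."') is the giveaway. jq prints strings with surrounding quotes by default, so the command ID being passed includes literal quote characters. Using jq's -r (raw output) flag should make the scripted call behave like the manually typed one:
command_id=$(echo $sh_command_id | jq -r '.Command.CommandId')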

Getting the value of a variable in an Azure pipeline

environment: 'dev'
acr-login: $(environment)-acr-login
acr-secret: $(environment)-acr-secret
dev-acr-login and dev-acr-secret are secrets stored in Key Vault for the ACR login and secret.
In the pipeline, I am getting the secrets with this task:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
This task creates task variables with the names 'dev-acr-login' and 'dev-acr-secret'.
Now if I want to log in to Docker, I am not able to do that.
The following code works and I am able to log in to ACR:
- bash: |
    echo $(dev-acr-secret) | docker login \
      $(acrName) \
      -u $(dev-acr-login) \
      --password-stdin
  displayName: 'docker login'
The following does not work. Is there a way that I can use the variable names $(acr-login) and $(acr-secret) rather than the actual key names from Key Vault?
- bash: |
    echo $(echo $(acr-secret)) | docker login \
      $(acrRegistryServerFullName) \
      -u $(echo $(acr-login)) \
      --password-stdin
  displayName: 'docker login'
You could pass them as environment variables:
- bash: |
    echo $(echo $ACR_SECRET) | ...
  displayName: docker login
  env:
    ACR_SECRET: $(acr-secret)
But what is the purpose, as opposed to just echoing the password values as you said works in the other example? As long as the task creates secure variables, they will be protected in the logs. You'd need to do that anyway, since they would otherwise show up in diagnostic logs if someone enabled diagnostics, which anyone can do.
An example to do that:
- bash: |
    echo "##vso[task.setvariable variable=acr-login;issecret=true;]$ACR_SECRET"
  env:
    ACR_SECRET: $($(acr-secret)) # Should expand recursively
See Define variables: Set secret variables for more information and examples.

Disable scheduling on second instance of same project on AWS

I have 2 instances of the same deployment/project on AWS Elastic Beanstalk.
Both contain a Laravel project with scheduling code that runs various commands, defined in the schedule method of the Kernel.php class within app/Console. The problem I have is that if a command runs from one instance then it will also run from the second instance, which is not what I want to happen.
What I would like to happen is that the commands get run from only one instance and not the other. How do I achieve this in the easiest way possible?
Is there a Laravel package which could help me achieve this?
From Laravel 5.6:
Laravel provides an onOneServer method which you can use if your applications share a single cache server. You could use something like ElastiCache to host Redis or Memcached and use it as the cache server for both of your application instances. Then you would be able to use the onOneServer method like this:
$schedule->command('report:generate')
    ->fridays()
    ->at('17:00')
    ->onOneServer();
For older versions of Laravel:
You could use the jdavidbakr/multi-server-event package. Once you have it set up, you should be able to use it like:
$schedule->command('inspire')
    ->daily()
    ->withoutOverlappingMultiServer();
I had the same issue running some cron jobs (nothing related to Laravel) and I found a nice solution (I don't remember where I found it).
What I do is check whether the instance running the code is the first instance in the Auto Scaling Group; if it is, I execute the command, otherwise I just exit.
This is the way it's implemented:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id 2>/dev/null)
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document 2>/dev/null | jq -r .region)

# Find the Auto Scaling Group name from the Elastic Beanstalk environment
ASG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
    --region $REGION --output json | jq -r '.[][] | select(.Key=="aws:autoscaling:groupName") | .Value')

# Find the first instance in the Auto Scaling Group
FIRST=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG \
    --region $REGION --output json | \
    jq -r '.AutoScalingGroups[].Instances[] | select(.LifecycleState=="InService") | .InstanceId' | sort | head -1)

# Exit 0 (success) only if the instance ids are the same
[ "$FIRST" = "$INSTANCE_ID" ]
Try implementing those calls using PHP and it should work.
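To tie this back to the scheduler question: if the script above is saved as, say, /usr/local/bin/leader-check.sh and made executable (both paths below are assumptions), its exit status can gate the cron entry so schedule:run only fires on the first instance:
# crontab entry installed on every instance; only the ASG's first instance passes the check
* * * * * /usr/local/bin/leader-check.sh && cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1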

How to add tags to Buildbot EC2LatentBuildSlave

I am using EC2LatentBuildSlave to spawn EC2 instances running the build slaves. I would like to tag the slaves so that a tag is visible in the Tags tab of the EC2 dashboard.
I am passing
tags={'Key':'BuildbotType', 'Value':'slaveName'}
but I can't see the tags on the spawned EC2 instance.
Have I misunderstood this parameter?
Thanks for your help.
This seems to be a bug in Buildbot version 0.8.12, and unfortunately we can't upgrade at this point. Instead, I installed an init script (service) on the base AMI that reads user_data and tags itself. I pass the tag details using the user_data field of EC2LatentBuildSlave:
login_aws() {
    : # Some code to configure aws access keys goes here.
}

get_userdata() {
    AWS_USERDATA=$(curl -s http://169.254.169.254/latest/user-data)
    echo $AWS_USERDATA
}

tag_self_for_identification() {
    login_aws
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    if [ -z "$INSTANCE_ID" ]; then
        echo "Error: unable to obtain instance id"
        return 0
    fi
    # Command substitution here - a bare USERDATA=get_userdata would not call the function
    USERDATA=$(get_userdata)
    set -- $USERDATA
    NAME="$2 $3"
    if [ ! -z "$NAME" ]; then
        echo "aws ec2 create-tags --resources $INSTANCE_ID --tags Key=Name,Value=$NAME"
        aws ec2 create-tags --resources $INSTANCE_ID --tags Key=Name,Value="$NAME" \
            Key=Buildbot,Value=Latentslave
    fi
}

tag_self_for_identification
This is an extremely over-engineered solution, but I needed the tags to work so that some Lambda functions could identify these latent slaves.
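For reference, the parsing above takes the second and third whitespace-separated fields of user_data as the instance's Name tag. A minimal sketch of that behaviour (the payload string is a made-up example):
USERDATA="buildbot BuildbotSlave slave01"   # hypothetical user_data payload
set -- $USERDATA                            # split into positional parameters
NAME="$2 $3"
echo "$NAME"                                # prints: BuildbotSlave slave01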

knife vsphere requests root password - is unattended execution possible?

Is there any way to run knife vsphere unattended? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOST_NAME.my.lan
##############################################
# These are pseudo-environment-independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
    --template $TEMPLATE \
    --cips $IP_ADDRESS \
    --vsdc MarkleyDC \
    --datastore $DATASTORE \
    --cvlan $NETWORK \
    --resource-pool $CLUSTER \
    --cgw $GATEWAY \
    --cdnsips $DNS \
    --start true \
    --bootstrap true \
    --fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
    --chost $HOST_NAME \
    --cdomain my.lan \
    --run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for the password, sometimes it times out, the connection is lost, and chef doesn't bootstrap. I would also like to be able to automate all of this so it can scale elastically based on system needs, which won't work with attended execution.
The idea I am going to run with, unless a better solution is provided, is to have a default password in the template and pass it on the command line to knife, then have chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife...
Update: I wanted to add that this is working like a charm. Ideally we would have changed the CentOS template we were deploying, but that wasn't possible here, so this is a fine alternative (as we changed the root password after deploy anyhow).
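For anyone taking the same route, a sketch of how the default password might be passed (this assumes your knife-vsphere version forwards the bootstrap SSH options; verify the flags with knife vsphere vm clone --help before relying on them):
# Assumed flags - confirm they exist in your knife-vsphere version.
# TEMPLATE_ROOT_PW is the template's default root password, to be rotated by chef after bootstrap.
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
    --template $TEMPLATE \
    --ssh-user root \
    --ssh-password $TEMPLATE_ROOT_PW \
    --bootstrap true \
    --run-list=$RUNLIST"
# (remaining clone options omitted here for brevity)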
