Env variables not persisting in parent shell - bash

I have the following bash script that attempts to automate the assuming of an AWS role (I've obviously removed the various private settings):
#! /bin/bash
#
# Dependencies:
# brew install jq
#
# Execute:
# source aws-cli-assumerole.sh
unset AWS_SESSION_TOKEN
export AWS_ACCESS_KEY_ID=<user_access_key>
export AWS_SECRET_ACCESS_KEY=<user_secret_key>
temp_role=$(aws sts assume-role \
--role-arn "arn:aws:iam::<aws_account_number>:role/<role_name>" \
--role-session-name "<some_session_name>")
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken)
env | grep -i AWS_
I have to execute this script using source because otherwise if I use standard bash or sh the exported environment variables are not available within the parent shell executing this script.
The problem is that even when using source it doesn't fully work; and by that I mean: the environment variables AND their correct/updated values do show up in the parent shell (if I execute env | grep AWS_ I can see the correct values).
If I then try to use the AWS CLI tools (e.g. aws s3 ls - to list all s3 buckets within the specific account I've assumed the role for) it'll report back that the access key is invalid.
BUT, if I manually copy and paste the environment variable values and re-export them in the parent shell (effectively overwriting them with the exact same values that are already set), then the AWS CLI command will work - but I do not know why. What's different?

jq .Blah returns its output quoted.
So, for example,
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
will set the variable to "KEY", when what you need is just KEY, which is why the xargs workaround mentioned in your comment works.
If you use the -r (raw output) flag with jq you will get the result you want:
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq -r .Credentials.AccessKeyId)
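Applied to the exports in the original script, the fix might look like this (same variable names as the question; the only change is the added -r flag, plus quoting $temp_role to be safe):
export AWS_ACCESS_KEY_ID=$(echo "$temp_role" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$temp_role" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$temp_role" | jq -r .Credentials.SessionToken)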

Another way to assume an AWS role:
Write a profile, which automatically assumes the role.
aws configure --profile new-profile set role_arn arn:aws:iam::<aws_account_number>:role/<role_name>
To give credentials to the new profile, you must use one of the following lines:
aws configure --profile new-profile set source_profile default
aws configure --profile new-profile set credential_source Ec2InstanceMetadata
aws configure --profile new-profile set credential_source EcsContainer
The first line was correct on my personal PC, because I used the default profile.
The third line was correct when I tested the code with AWS CodeBuild; the new profile used the credentials of the codepipeline role.
Afterwards, you may use the new profile, for example:
aws --profile new-profile s3 ls s3://bucket-in-target-account
Documentation: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#using-aws-iam-roles
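For reference, those commands write into ~/.aws/config; with the source_profile variant, the resulting profile section would look roughly like this (account number and role name are placeholders):
[profile new-profile]
role_arn = arn:aws:iam::<aws_account_number>:role/<role_name>
source_profile = default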

Related

How to use github environment variables within a run command (github actions)

Set up:
Using GitHub Actions.
One of the steps uses an aws ssm send-command to run some scripts on an EC2 instance (AWS CLI reference here).
The script is just to copy a file from S3 onto the instance.
I have a GitHub environment variable which just has a path set as the value.
So the code looks like this:
# setting github env variable
env:
  PATH: C:\
# Script I'm using in my aws cli command
run: |
  aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \"${{ env.PATH }}\" -Force""]' \
This errors with
Error Response: "Copy-S3Object : The given path's format is not supported.
I can see it is running the commands as:
'"Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \'C:\\' -Force"'
So it is keeping the slashes that I included to escape the quotation marks, and it thinks the filepath is literally \'C:\\'
Note: the quotation marks are needed around the GitHub variable, as it couldn't be read when I tried without them.
Questions
Can I not use a GitHub environment variable within a script?
Any ideas on how to still escape the quotations but not have them show up in the script?
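A minimal illustration of the underlying shell quoting behaviour, with generic strings rather than the real workflow: inside single quotes bash keeps backslashes literally, while inside double quotes \" collapses to a plain quote, which suggests one possible (untested) direction of quoting the JSON with double quotes and wrapping the path in PowerShell single quotes:
# single quotes: \" is passed through verbatim, backslashes included
echo 'commands=["... -LocalFolder \"C:\" -Force"]'
# prints: commands=["... -LocalFolder \"C:\" -Force"]
# double quotes: \" becomes a plain ", no stray backslashes remain
echo "commands=[\"... -LocalFolder 'C:' -Force\"]"
# prints: commands=["... -LocalFolder 'C:' -Force"]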

Why AWS cli being seen by one bash script but not the other?

I've got 2 bash scripts in one directory. The first is run and executes:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
eval $(aws ecr get-login --region eu-west-2 --no-include-email) # executes OK
The second script is run and executes:
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
How come I get aws: command not found?
I get this error even when violating DRY like so:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
If you just execute a script, it will be executed by a child process, and your export PATH will die with that child process.
Try running the first script with "." or "source":
. ./first.sh
./second.sh
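A quick way to see the difference, using a hypothetical throwaway script demo.sh:
printf 'export FOO=bar\n' > demo.sh && chmod +x demo.sh
./demo.sh && echo "${FOO:-unset}"        # child process: prints "unset"
source ./demo.sh && echo "${FOO:-unset}" # current shell: prints "bar"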
This happens when you, for whichever reason, run the two scripts with different interpreters.
Your script works with Bash because it expands ~ both after = and in PATH. It fails in e.g. dash, which does neither.
You can make it work in all shells by using $HOME instead of ~:
export PATH="$HOME/bin:$PATH"
If you additionally want changes to PATH to apply to future scripts, the regular source rules apply.
Problem solved by installing the AWS command line tool with pip instead of pulling the bundled installer from AWS. No messing around with PATH was necessary.
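For reference, a minimal sketch of that pip-based install (awscli is the standard PyPI package name; the user-level bin directory can vary by platform and Python setup):
pip install --user awscli
export PATH="$HOME/.local/bin:$PATH"   # wherever pip placed the user-level scripts
aws --version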

Why can't I store the output of a command into a shell variable?

I'm retrieving an AWS Parameter Store value using the aws ssm command. I get the result back, and I need to store this value in a shell variable so that I can use it later on.
This is on a Mac terminal. I am invoking the AWS CLI to get the parameter and I do get the value back from AWS, but I can't assign it to a shell variable due to my lack of knowledge of the shell.
export PASS=echo "$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"')"
echo $PASS
When I do echo $PASS I expect to see the value of the parameter; however, I don't get anything. I am sure that the value exists, because if I don't export and just run the aws ssm get-parameters command on its own, I see the result.
The right way to assign the output of your command to the variable is:
export PASS=$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"')
The way you are doing it, this is what the shell does:
it sets PASS to the literal string echo
it runs the aws command pipeline, but its output becomes an extra argument to export (a variable name to export) rather than the value of PASS
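You can see the difference with a harmless command standing in for the aws call (date here is purely illustrative):
export PASS=echo "$(date)"   # wrong: PASS becomes the literal string "echo" (and export complains about the extra argument)
echo "$PASS"                 # prints: echo
export PASS="$(date)"        # right: the command's output is assigned to PASS
echo "$PASS"                 # prints the current date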
Related:
How to set a variable to the output of a command in Bash?

Managing multiple AWS accounts on the same computer

I have multiple AWS accounts, and depending on which project directory I'm in I want to use a different one when I type commands into the AWS CLI.
I'm aware that the AWS credentials can be passed in via environment variables, so I was thinking that one solution would be to set AWS_CONFIG_FILE based on which directory I'm in, but I'm not sure how to do that.
I'm using Mac OS X. The AWS CLI version is aws-cli/1.0.0 Python/2.7.1 Darwin/11.4.2, and I'm doing all this in order to use AWS in a Rails 4 app.
I recommend using different profiles in the configuration file, and just specify the profile that you want with:
aws --profile <your-profile> <command> <subcommand> [parameters]
If you don't want to type the profile for each command, just run:
export AWS_DEFAULT_PROFILE=<your-profile>
before a group of commands.
If you want to somehow automate the process of setting that environment variable when you change to a directory in your terminal, see Dynamic environment variables in Linux?
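For reference, a hedged sketch of what the credentials file (~/.aws/credentials with a current CLI) might look like with two named profiles; the profile names and keys here are placeholders:
[default]
aws_access_key_id = <account-one-access-key>
aws_secret_access_key = <account-one-secret-key>
[project-b]
aws_access_key_id = <account-two-access-key>
aws_secret_access_key = <account-two-secret-key>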
I think you can create an alias to the aws command which exports the AWS_CONFIG_FILE variable depending on the directory you are in. Something like the following (bash) may work.
First create the following shell script, let's call it match.sh, and put it in /home/user/:
#!/bin/bash
export AWS_CONFIG_FILE=$(if [[ "$PWD" =~ "MATCH" ]]; then echo "ABC"; else echo "DEF"; fi)
aws "$@"
Now define an alias in your ~/.bashrc:
alias awsdirbased="/home/user/match.sh"
(An alias cannot take positional parameters; any arguments you type after awsdirbased are appended automatically and picked up by the script's "$@".)
Now whenever you want to run the "aws" command, run "awsdirbased" instead, and it should work.
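Assuming the script really is saved at /home/user/match.sh, usage might look like:
chmod +x /home/user/match.sh
source ~/.bashrc    # pick up the new alias
awsdirbased s3 ls   # runs aws with AWS_CONFIG_FILE chosen from the current directory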

How to pass files from EC2 to S3 and S3 to EC2 using user data

The code below is a shell script that creates a new AWS EC2 instance with user data.
In this code, it creates the new instance, executes cd /home, and creates a directory named pravin.
But after that it neither downloads the file from S3 nor uploads it to S3.
What is wrong with that code (the s3cmd get and put lines)?
The AMI used for this is pre-configured with the AWS EC2 command line API tools and s3cmd.
str=$"#! /bin/bash"
str+=$"\ncd /home"
str+=$"\nmkdir pravin"
str+=$"\ns3cmd get inputFile.txt s3://bucketName/inputFile.txt"
str+=$\ns3cmd put resultFile.txt s3://bucketName/outputFile.txt"
echo "$str"|base64
ud=`echo -e "$str" |base64`
echo "$ud"
export JAVA_HOME=/usr
export EC2_HOME=/home/ec2-api-tools-1.6.7.1
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=accesskey
export AWS_SECRET_KEY=secretkey
if [ "$3" = "us-east-1" ]
then
ec2-run-instances ami-fa791231 -t t1.micro -g groupName -n 1 -k Key1 -d "$ud" --region $3 --instance-initiated-shutdown-behavior terminate
else
echo "Not Valid region"
fi
There is a problem with your "s3cmd get" command. You have the parameter order backwards. From the "s3cmd --help" output:
Put file into bucket
  s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Get file from bucket
  s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
You can see that you need to change your get command to:
str+=$"\ns3cmd get s3://bucketName/inputFile.txt inputFile.txt"
Note that the s3:// URI comes first, before the file name. That should fix that issue. Your code also appears to be missing a quote for the put command:
str+=$"\ns3cmd put resultFile.txt s3://bucketName/outputFile.txt"
