How to use GitHub environment variables within a run command (GitHub Actions) - bash

Set up:
Using GitHub Actions
One of the steps uses an AWS SSM send-command to run some scripts on an EC2 instance (AWS CLI reference here)
The script is just to copy a file from S3 onto the instance
I have a GitHub environment variable which just has a path set as the value
So code looks like this:
#setting github env variable
env:
  PATH: C:\
#Script I'm using in my aws cli command
run: |
  aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \"${{ env.PATH }}\" -Force""]' \
This errors with
Error Response: "Copy-S3Object : The given path's format is not supported.
I can see it is running the commands as:
'"Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \'C:\\' -Force"'
So it is reading the backslashes that I included to escape the quotation marks, and it thinks the file path is literally \'C:\\'
Note: the quotation marks are needed around the GitHub variable, as it couldn't be read when I tried without them
Questions
Can I not use a GitHub environment variable within a script?
Any ideas on how to still escape the quotation marks but not have them show up in the script?
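For what it's worth, a variable declared under env: is also exported into the step's shell, so inside run: it can be referenced as an ordinary shell variable instead of a ${{ }} expression. A minimal sketch of just that mechanism (DEST_PATH is a hypothetical replacement name, chosen to avoid reusing PATH, which is the shell's executable search path; the SSM/PowerShell quoting itself still needs whatever escaping the document expects):
env:
  DEST_PATH: 'C:\'
run: |
  echo "copy destination is $DEST_PATH"
  # splice $DEST_PATH into the ssm send-command string here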

Related

Assigning output from az artifacts universal download to variable

I have a question. Is it possible to assign the output from az artifacts universal download to a variable?
I have a Jenkins job where I have script in shell like this:
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name $PACKAGE \
--version $VERSION \
--debug \
--path .
Then I would like to transport the file to Artifactory with this:
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -X PUT https://artifactory.com/my/test/repo/my_test_file_{VERSION}
I ran the job but noticed that I passed an empty file to Artifactory. It created my_test_file_{VERSION}, but it was 0 MB. As far as I understand, I just created an empty file with curl. So I would like to pass the output from the az download to the Artifactory repo. Is it possible? How can I do this?
I understand that I need to assign the file output to a variable and pass it to curl like:
$MyVariableToPass = az artifacts universal download output
And then pass this var to curl.
Is it possible? How can I pass files from the Jenkins shell job to Artifactory?
Also I am not using any plugin right now.
Please help.
A possible solution is to use a VM as the Jenkins agent and install the Azure CLI inside the VM; you can then run the task on that node. To set a variable with a value from the CLI command's output, for example when the output looks like this:
{
"name": "test_name",
....
}
Then you can set the variable like this:
name=$(az ...... --query name -o tsv)
This is on Linux. On Windows (PowerShell), you can set it like this:
$name = $(az ...... --query name -o tsv)
As far as I know, the download command won't output the content of the file, so if you want to set the file's content as a variable, this approach isn't suitable.
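That said, the file's content doesn't need to go through a variable at all: az artifacts universal download writes the file into the directory given by --path, and curl can upload that file directly with -T. A minimal sketch, assuming the package contains a single file named my_test_file_$VERSION (the ./download directory and the file name are assumptions):
az artifacts universal download \
  --organization "sampleorganization" \
  --project "sampleproject" \
  --scope project \
  --feed "sample-artifacts" \
  --name "$PACKAGE" \
  --version "$VERSION" \
  --path ./download
# upload the downloaded file itself (-T implies PUT) instead of creating an empty object
curl -v -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASS" \
  -T "./download/my_test_file_$VERSION" \
  "https://artifactory.com/my/test/repo/my_test_file_$VERSION"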

Why is the AWS CLI seen by one bash script but not the other?

I've got two bash scripts in one directory. The first is run and executes:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
eval $(aws ecr get-login --region eu-west-2 --no-include-email) # executes OK
The second script is run and executes:
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
How come I get aws: command not found?
I get this error even when violating DRY like so:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
If you just execute a script, it is run by a child process, and your export PATH dies with that child process.
Try running the first script with "." or "source":
. ./first.sh
./second.sh
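A minimal sketch of the difference, using a throwaway first.sh:
cat > first.sh <<'EOF'
export PATH="$HOME/bin:$PATH"
EOF
bash first.sh   # child process: the exported PATH dies when the script exits
echo "$PATH"    # unchanged
. ./first.sh    # sourced: runs in the current shell
echo "$PATH"    # now starts with $HOME/bin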
This happens when you, for whichever reason, run the two scripts with different interpreters.
Your script works with Bash because it expands ~ both after = and in PATH. It fails in e.g. dash, which does neither.
You can make it work in all shells by using $HOME instead of ~:
export PATH="$HOME/bin:$PATH"
If you additionally want the changes to PATH to apply to future scripts, the regular source rules apply.
Problem solved by installing the AWS command line tool with pip instead of pulling the bundled installer from AWS. No messing around with PATH was necessary.

Why can't I store the output of a command into a shell variable?

I'm retrieving an AWS Parameter Store value using the aws ssm command. I get the result back, and I need to store this value in a shell variable so that I can use it later on.
This is on the Mac terminal. I'm invoking the AWS CLI to get the AWS parameters and I do get the values back, but I can't set them in a shell variable due to my lack of knowledge of the shell.
export PASS=echo "$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"')"
echo $PASS
When I do echo $PASS I expect to see the value of the parameter; however, I don't get anything. I am sure that the value exists, because if I don't export and just run aws ssm get-parameters, I see the result.
The right way to assign the output of your command to the variable is:
export PASS=$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"')
The way you are doing it, this is what the shell does:
it sets PASS to the literal string echo
it runs the aws command pipeline and hands its output to export as a second argument, where export treats it as another variable name rather than as part of PASS's value
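For reference, the same assignment also works without the tr -d '"' step if you ask the CLI for plain text output with --output text:
export PASS=$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" --output text)
echo "$PASS"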
Related:
How to set a variable to the output of a command in Bash?

AWS CLI S3 cp Not Recognizing Quoted Source and Target in Shell Script

New to shell scripting. I'm trying to use a shell script on a RHEL 6.9 Linux server to upload a file with whitespace in its filename to AWS S3 with the AWS CLI. I have tried single and double quotes and have been reading AWS CLI links like http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
Here is a simple version of my script with the problem:
#!/bin/bash
profile=" --profile XXXXXXX"
sourcefile=" '/home/my_login/data/batch4/Test File (1).zip'"
targetobject=" 's3://my-bucket/TestFolder/batch4/Test File (1).zip'"
service=" s3"
action=" cp"
encrypt=" --sse"
func="aws"
awsstring=$func$profile$service$action$sourcefile$targetobject$encrypt
echo $awsstring
$awsstring
When I run I get:
$ ./s3copy.sh
aws --profile XXXXXXX s3 cp '/home/my_login/data/batch4/Test File (1).zip' 's3://my-bucket/TestFolder/batch4/Test File (1).zip' --sse
Unknown options: (1).zip','s3://my-bucket/TestFolder/batch4/Test,File,(1).zip'
When I execute the $awsstring value from command line, it works:
$ aws --profile XXXXXXX s3 cp '/home/my_login/data/batch4/Test File (1).zip' 's3://my-bucket/TestFolder/batch4/Test File (1).zip' --sse
upload: data/batch4/Test File (1).zip to s3://my-bucket/TestFolder/batch4/Test File (1).zip
The AWS CLI does not seem to recognize the quotes in the shell script. I need to quote the file names in my script because they contain whitespace.
Question: Why does the string execute correctly from the command line, but not from within the shell script?
Use eval $awsstring. I faced a similar issue; you can look at my answer: https://stackoverflow.com/a/47111888/2396539
On second thought, having spaces in file names is not desirable, and if you can control it, avoid it in the first place.
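An alternative to eval, sketched here rather than taken from the answers above, is to build the command as a bash array instead of a single string, so each quoted argument keeps its spaces:
#!/bin/bash
# each array element is one argument, so the whitespace in the file names is preserved
awscmd=(aws --profile XXXXXXX s3 cp
  "/home/my_login/data/batch4/Test File (1).zip"
  "s3://my-bucket/TestFolder/batch4/Test File (1).zip"
  --sse)
echo "${awscmd[@]}"
"${awscmd[@]}"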

Env variables not persisting in parent shell

I have the following bash script that attempts to automate assuming an AWS role (I've obviously removed the various private settings):
#! /bin/bash
#
# Dependencies:
# brew install jq
#
# Execute:
# source aws-cli-assumerole.sh
unset AWS_SESSION_TOKEN
export AWS_ACCESS_KEY_ID=<user_access_key>
export AWS_SECRET_ACCESS_KEY=<user_secret_key>
temp_role=$(aws sts assume-role \
--role-arn "arn:aws:iam::<aws_account_number>:role/<role_name>" \
--role-session-name "<some_session_name>")
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken)
env | grep -i AWS_
I have to execute this script using source because otherwise if I use standard bash or sh the exported environment variables are not available within the parent shell executing this script.
The problem is, even when using source it doesn't work; and by that I mean: the environment variables AND their correct/updated values are showing in the parent shell (if I execute env | grep AWS_ I can see the correct values).
If I then try to use the AWS CLI tools (e.g. aws s3 ls - to list all s3 buckets within the specific account I've assumed the role for) it'll report back that the access key is invalid.
BUT, if I manually copy and paste the environment variable values and re-export them in the parent shell (effectively overwriting them with the exact same values that are already set), then the AWS CLI command will work - but I do not know why. What's different?
jq .Blah will return the output quoted.
So, for example,
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
will result in "KEY" (with the quotes), when what you need is just KEY, which is why the xargs in your comment works.
If you use jq's -r (raw output) flag, you will get the result you want:
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq -r .Credentials.AccessKeyId)
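Applied to the rest of the script, the other two exports get the same -r flag:
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq -r .Credentials.SessionToken)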
Another way to assume an AWS role:
Write a profile, which automatically assumes the role.
aws configure --profile new-profile set role_arn arn:aws:iam::<aws_account_number>:role/<role_name>
To give credentials to the new profile, you must use one of the following lines:
aws configure --profile new-profile set source_profile default
aws configure --profile new-profile set credential_source Ec2InstanceMetadata
aws configure --profile new-profile set credential_source EcsContainer
Line 1 was correct on my personal PC, because I used the default profile.
Line 3 was correct when I tested the code with AWS CodeBuild; the new profile used the credentials of the codepipeline-role.
Afterwards, you may use the new profile, example:
aws --profile new-profile s3 ls s3://bucket-in-target-account
Documentation: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#using-aws-iam-roles
