Assigning output from az artifacts universal download to variable - bash

I have a question. Is it possible to assign the output from az artifacts universal download to a variable?
I have a Jenkins job with a shell script like this:
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name $PACKAGE \
--version $VERSION \
--debug \
--path .
Then I would like to transport the file to Artifactory with this:
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -X PUT https://artifactory.com/my/test/repo/my_test_file_{VERSION}
I ran the job but noticed that I passed an empty file to Artifactory. It created my_test_file_{VERSION}, but it was 0 MB. As far as I understand, I just created an empty file with curl. So I would like to pass the output from az download to the Artifactory repo. Is it possible? How can I do this?
I understand that I need to assign the file output to a variable and pass it to curl, like:
$MyVariableToPass = az artifacts universal download output
And then pass this variable to curl.
Is it possible? How can I pass files from the Jenkins shell job to Artifactory?
Also I am not using any plugin right now.
Please help.

A possible solution is to use a VM as the Jenkins agent and install the Azure CLI inside that VM, then run the task on that node. To set a variable from the value of a CLI command, for example when the output looks like this:
{
"name": "test_name",
....
}
Then you can set the variable like this:
name=$(az ...... --query name -o tsv)
This is for Linux. On Windows (PowerShell), you can set it like this:
$name = $(az ...... --query name -o tsv)
And as far as I know, the command that downloads the file won't output the content of the file, so this approach is not suitable if you want to put the file's content into a variable.
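As a hedged sketch of how this could fit into the Jenkins shell step: the --query field below assumes the command's JSON output really contains a name field as shown above, and the file name passed to curl -T is an assumption based on the question (the package is written to the directory given by --path, so the file itself can be uploaded instead of a variable):
# Capture a single field from the command's JSON output (field name assumed)
name=$(az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name "$PACKAGE" \
--version "$VERSION" \
--path . \
--query name -o tsv)
# The downloaded package sits on disk under --path, so upload that file with curl -T
# (the file name here is an assumption based on the question)
curl -v -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASS" \
-T "my_test_file_${VERSION}" \
"https://artifactory.com/my/test/repo/my_test_file_${VERSION}"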

Related

How to use github environment variables within a run command (github actions)

Set up:
Using GitHub Actions
One of the steps uses an aws ssm send-command to run some scripts on an EC2 instance (AWS CLI reference here)
The script just copies a file from S3 onto the instance
I have a GitHub environment variable which just has a path set as the value
So the code looks like this:
#setting github env variable
env:
  PATH: C:\
#Script I'm using in my aws cli command
run: |
  aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \"${{ env.PATH }}\" -Force""]' \
This errors with
Error Response: "Copy-S3Object : The given path's format is not supported.
I can see it is running the commands as:
'"Copy-S3Object -BucketName BUCKETNAME -Key FILENAME -LocalFolder \'C:\\' -Force"'
So it is reading the backslashes that I included to escape the quotation marks, and it thinks the file path is literally \'C:\\'
Note: the quotation marks are needed around the GitHub variable, as it couldn't be read when I tried without them.
Questions
Can I not use a GitHub environment variable within a script?
Any ideas on how to still escape the quotation marks but not have them show up in the script?

Metaplex uploading error. "path" argument must be string

I'm trying to use Metaplex to upload NFTs and I'm having some issues with the upload.
I'm running this command:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload \ -e devnet \ -k C:\server3\NFT\keypair.json \ -cp config.json \ -c example \ c:/server3/NFT/assets
and getting this error.
Now I know WHY I'm getting the error: it says it is skipping the unsupported file "/server3", which is where the files are located. How do I make it not skip that folder? I believe that's why the path is returning undefined.
Windows has an issue with multi-line commands; the new lines are indicated with the \ after every parameter. If you remove the extra \ characters and leave everything on one line, it should resolve the issue for you:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k C:\server3\NFT\keypair.json -cp config.json -c example c:/server3/NFT/assets
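If you would rather keep the command on multiple lines, one option (not from the original answer, just a general Windows shell detail) is to use PowerShell's backtick as the line-continuation character instead of \:
# PowerShell line continuation uses a backtick rather than \ (illustrative only)
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload `
-e devnet `
-k C:\server3\NFT\keypair.json `
-cp config.json `
-c example `
c:/server3/NFT/assets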

Why AWS cli being seen by one bash script but not the other?

I've got 2 bash scripts in one directory. The first is run and executes:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
eval $(aws ecr get-login --region eu-west-2 --no-include-email) # executes OK
The second script is run and executes:
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
How come I get aws: command not found?
I get this error even when violating DRY like so:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
configure_aws_cli() {
  aws --version
  aws configure set default.region eu-west-2
  aws configure set default.output json
  echo "AWS configured."
}
configure_aws_cli
If you just execute a script, it runs in a child process, and your "export PATH" dies with that child process.
Try running the first script with "." or "source" instead:
. ./first.sh
./second.sh
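A quick illustration of the difference, using a hypothetical first.sh that only exports a variable:
# first.sh just exports a variable
printf '#!/bin/sh\nexport FOO=bar\n' > first.sh
chmod +x first.sh
./first.sh                # runs in a child process; the export dies with it
echo "${FOO:-unset}"      # prints "unset"
. ./first.sh              # runs in the current shell; the export persists
echo "${FOO:-unset}"      # prints "bar"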
This happens when you, for whatever reason, run the two scripts with different interpreters.
Your script works with Bash because it expands ~ both after = and in PATH. It fails in e.g. dash, which does neither.
You can make it work in all shells by using $HOME instead of ~:
export PATH="$HOME/bin:$PATH"
If you additionally want changes to PATH to apply to future scripts, the regular source rules apply.
Problem solved by installing the AWS command line tool using pip instead of pulling the bundle from the public URL provided by AWS. No messing around with PATH was necessary.
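A minimal sketch of that pip-based install, assuming the v1 awscli package and a per-user install (the exact PATH entry depends on where pip places user scripts):
# Install the AWS CLI (v1) into the user's site-packages
pip install --user awscli
# On many Linux systems the user-level scripts land in ~/.local/bin
export PATH="$HOME/.local/bin:$PATH"
aws --version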

Use docker environment path variable in bash script

I am trying to create a Dockerfile to run a machine learning algorithm with parameters given in a config.json file. A simplified version of my docker command looks like this:
docker run --rm -it \
-e "CONFIG=work/algorithms/config.json" \
-e "SRC_TYPE=csv"
--entrypoint /bin/bash \
$(DOCKER_REPO)/$(DOCKER_IMAGE):$(DOCKER_VERSION)
And the bash script running the python command looks like this:
#!/bin/sh
python work/algos/neural_network.py \
--ml_conf "$CONFIG" \
--src_type "$SRC_TYPE" \
--log resources/logs/nn_iris.log
When I use the CONFIG variable in the script like this, it does not work, but the SRC_TYPE variable works. Could you please let me know the right way to use environment variables which contain a path?
I think you would like to use the config file inside the running Docker container. If so, you should use Docker volumes instead. Please see the reference here.
for example:
docker run -v /work/algorithms/config.json:/path/to/target -it <image_name:tag>
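A hedged sketch of how that could look combined with the command from the question; the host path and the container path /work/algorithms/config.json are assumptions, so adjust them to wherever the script actually expects the file:
docker run --rm -it \
-v "$(pwd)/work/algorithms/config.json:/work/algorithms/config.json" \
-e "CONFIG=/work/algorithms/config.json" \
-e "SRC_TYPE=csv" \
--entrypoint /bin/bash \
<image_name:tag>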

Env variables not persisting in parent shell

I have the following bash script that attempts to automate the assuming of an AWS role (I've obviously removed the various private settings):
#! /bin/bash
#
# Dependencies:
# brew install jq
#
# Execute:
# source aws-cli-assumerole.sh
unset AWS_SESSION_TOKEN
export AWS_ACCESS_KEY_ID=<user_access_key>
export AWS_SECRET_ACCESS_KEY=<user_secret_key>
temp_role=$(aws sts assume-role \
--role-arn "arn:aws:iam::<aws_account_number>:role/<role_name>" \
--role-session-name "<some_session_name>")
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken)
env | grep -i AWS_
I have to execute this script using source because otherwise if I use standard bash or sh the exported environment variables are not available within the parent shell executing this script.
The problem is, even when using source it doesn't work; and by that I mean: the environment variables AND their correct/updated values are showing in the parent shell (if I execute env | grep AWS_ I can see the correct values).
If I then try to use the AWS CLI tools (e.g. aws s3 ls - to list all s3 buckets within the specific account I've assumed the role for) it'll report back that the access key is invalid.
BUT, if I manually copy and paste the environment variable values and re-export them in the parent shell (effectively overwriting them with the exact same values that are already set), then the AWS CLI command will work - but I do not know why. What's different?
jq .Blah will return the output quoted.
So, for example,
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId)
will result in "KEY", whereas what you need is just KEY, which is why the xargs in your comment works.
If you use jq's -r (raw output) flag, you will get the result you want:
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq -r .Credentials.AccessKeyId)
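Applied to all three exports from the script in the question:
export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq -r .Credentials.SessionToken)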
Another way to assume an AWS role:
Write a profile, which automatically assumes the role.
aws configure --profile new-profile set role_arn arn:aws:iam::<aws_account_number>:role/<role_name>
To give credentials to the new profile, you must use one of the following lines:
aws configure --profile new-profile set source_profile default
aws configure --profile new-profile set credential_source Ec2InstanceMetadata
aws configure --profile new-profile set credential_source EcsContainer
Line 1 was correct on my personal PC, because I used the default profile.
Line 3 was correct when I tested the code with AWS CodeBuild; the new profile used the credentials of the codepipeline-role.
Afterwards, you may use the new profile, example:
aws --profile new-profile s3 ls s3://bucket-in-target-account
Documentation: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#using-aws-iam-roles
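For reference, the aws configure set commands above should end up writing a profile block roughly like this to ~/.aws/config (placeholder values taken from the answer; the exact layout is an assumption):
[profile new-profile]
role_arn = arn:aws:iam::<aws_account_number>:role/<role_name>
source_profile = default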
