Running Azure AZ login command with Terraform - bash

Issue:
I am trying to execute a bash script from Terraform, and it throws an error.
Environment: I am running Terraform in VS Code (with a bash terminal) on Windows 10.
I've also tried running it in a standard Git Bash terminal, and it throws the same error.
I've also tried replacing 'program = ["bash",' with 'program = ["az",' but it still throws an error.
My bash script:
#!/bin/bash
# Exit if any of the intermediate steps fail
set -e
# Login
az login --service-principal -u "${ARM_CLIENT_ID}" -p "${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" >/dev/null
# Extract the query json into variables
eval "$(jq -r '#sh "SUBSCRIPTION_NAME=\(.subscription_name)"')"
# Get the subscription id and pass back map
az account list --query "[?name == '${SUBSCRIPTION_NAME}'].id | {id: join(', ', @)}" --output json
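For reference, Terraform's external data source hands the query to the program as JSON on stdin, so the script can be tested outside Terraform by piping that JSON in by hand (a minimal sketch; it assumes the ARM_* variables are already exported, and the subscription name is a placeholder):
# Simulate what Terraform's external data source sends on stdin
echo '{"subscription_name":"my-subscription"}' | bash scripts/lookupByName.sh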
My main.tf file:
locals {
  access_levels     = ["read", "write"]
  subscription_name = lower(var.subscription_name)
}

# Lookup existing subscription
data "azurerm_subscription" "current" {}

# Lookup subscription by name
data "external" "lookupByName" {
  # Looks up a subscription by its display name and returns its id
  program = ["bash", "${path.module}/scripts/lookupByName.sh"]

  query = {
    subscription_name = local.subscription_name
  }
}
It throws this error after running 'terraform plan':
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.external.lookupByName: Refreshing state...
data.azurerm_subscription.current: Refreshing state...
Error: failed to execute "bash": usage: az login [-h] [--verbose] [--debug] [--only-show-errors]
[--output {json,jsonc,yaml,yamlc,table,tsv,none}]
[--query JMESPATH] [--username USERNAME] [--password PASSWORD]
[--service-principal] [--tenant TENANT]
[--allow-no-subscriptions] [-i] [--use-device-code]
[--use-cert-sn-issuer]
az login: error: Expecting value: line 1 column 1 (char 0)
on main.tf line 10, in data "external" "lookupByName":
10: data "external" "lookupByName" {

I suppose you are using the Windows Subsystem for Linux (WSL) on Windows 10. Based on your comment, instead of hardcoding the ARM_CLIENT_SECRET as a variable, you can store the credentials as environment variables in WSL like this:
$ export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
$ export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
You could read Configuring the Service Principal in Terraform for more details.
However, set this way, the environment variables are only valid for the current session. If you want to store them permanently, you can share environment variables between Windows and WSL via the WSLENV variable. WSLENV is supported starting with Windows build 17063, and it is case sensitive.
For example,
Firstly, set the environment variables in Windows 10.
Secondly, set the WSLENV variable in CMD:
C:\WINDOWS\system32>setx WSLENV ARM_TENANT_ID/u:ARM_CLIENT_ID/u:ARM_CLIENT_SECRET/u
SUCCESS: Specified value was saved.
Thirdly, restart VS Code; you can check the current WSL environment variables with export.
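For example, a quick way to verify that the credentials made it across into WSL (a sketch; the ARM_ prefix matches the variable names above):
# List only the Azure credential variables inherited from Windows
export | grep 'ARM_'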
At last, you should be able to run terraform plan in WSL in VS Code without that error.
For more information, you could refer to the following documents:
https://learn.microsoft.com/en-us/windows/wsl/interop
https://devblogs.microsoft.com/commandline/share-environment-vars-between-wsl-and-windows/


How do I pass an Azure resource ID to the az command line in git bash on Windows?

This question is very similar to How do I pass an absolute path to the adb command via git bash for windows?, but the answer to the latter does not work in my case, because //subscriptions/... is not a valid Azure resource Id.
Given:
VMS=( \
"/subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-master" \
"/subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-worker" \
)
I would like to start the VMs using the following command line:
az vm start --ids ${VMS[@]}
However, it does not work:
~$ az vm start --ids ${VMS[@]} | nocolor
ERROR: invalid resource ID: C:/Program Files/Git/subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-master
~$
(nocolor is an alias that drops the ansi color sequences using the approach described in https://superuser.com/a/380778/9089)
The aforementioned SO post suggests prepending another /, which works for a file path, but does not work for an Azure resource ID:
~$ VMS=( "//subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-master" "//subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-worker" )
~$ az vm start --ids ${VMS[@]} | nocolor
ERROR: invalid resource ID: //subscriptions/d...8/resourceGroups/xyz/providers/Microsoft.Compute/virtualMachines/cks-master
~$
So, what do we do, besides using PowerShell or WSL2?
You can set the MSYS_NO_PATHCONV=1 environment variable at the beginning of the az cli command.
In your case:
MSYS_NO_PATHCONV=1 az vm start --ids ${VMS[@]}
See:
https://github.com/Azure/azure-cli/blob/dev/doc/use_cli_with_git_bash.md#auto-translation-of-resource-ids
How to stop MinGW and MSYS from mangling path names given at the command line
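If many commands are affected, the conversion can also be switched off for the whole session rather than per command (a sketch relying on Git for Windows honoring the MSYS_NO_PATHCONV variable):
# Disable MSYS path mangling for every subsequent command in this shell
export MSYS_NO_PATHCONV=1
az vm start --ids ${VMS[@]}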

Self hosted environment variables not available to Github actions

When running GitHub Actions on a self-hosted runner machine, how do I access existing custom environment variables that have been set on the machine in my GitHub Actions .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the Github repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file under /etc/profile.d/ (i.e. /etc/profile.d/<filename>.sh), as suggested by @frennky above, but you will also have to reload the shell so it is aware of the new env variables, each time, before running the ./run.sh command:
Example (linux) with a HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Edit the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Manually add the following line to http_proxy.sh (it is a line in the file, not a shell command)
export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Reload the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will also be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to reload the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
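A slightly shorter variant of that reload step is to source just the new profile file in the same session before starting the runner (a sketch, assuming the http_proxy.sh file above):
# Pick up the new variable without spawning a nested shell
source /etc/profile.d/http_proxy.sh
./run.sh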
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable in a workflow step, e.g. - run: echo $TEST_VAR.

unable to set gcloud project using bash script

I am basically trying to set the Google project ID by calling a bash script, but I am unable to do so. However, if I run the commands separately, they work. I am calling the bash script from the gcloud Cloud Shell terminal.
The command: ./init.sh vibrant-brand-298097 vibrant-bucket terraform-trigger /var/dev/dev.tfvars
init.sh:
#!/bin/bash
PROJECT_ID=$1
#Bucket for storing state
BUCKET_NAME=$2
# Based on this value cloud build will set trigger on the test repository
TERRAFORM_TRIGGER=$3
# This is the path to the env vars file, terraform will pick variables from this path for the given env.
TERRAFORM_VAR_FILE_PATH=$4
# Check if all the args were passed
if [ $# -ne 4 ]; then
  echo "Not all the arguments were passed"
  exit 1
fi
echo "setting project to $PROJECT_ID"
gcloud config set project "$PROJECT_ID"
echo "Creating bucket $BUCKET_NAME"
gsutil mb -b on "gs://$BUCKET_NAME/"
Error log:
setting project to
ERROR: (gcloud.config.set) argument VALUE: Must be specified.
Usage: gcloud config set SECTION/PROPERTY VALUE [optional flags]
optional flags may be --help | --installation
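A quick way to investigate why $PROJECT_ID arrives empty is to trace the script and rule out Windows line endings (a debugging sketch using the arguments from the question):
# Trace each command with its expanded arguments
bash -x ./init.sh vibrant-brand-298097 vibrant-bucket terraform-trigger /var/dev/dev.tfvars
# Show invisible characters; CRLF line endings would append a stray ^M to each line
cat -A init.sh | head -n 5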

Send 2 commands to Docker with 1 being an output into the other

I have an Azure CLI container running. I would like to send 2 commands to the container:
Find resources tagged with X : az resource list --tag az=test --query "[].id" -otsv
Delete resources tagged with X: az resource delete --ids $(az resource list --tag az=test --query "[].id" -otsv)
My image/container has environment variables baked into it, so any az command I run will execute against the Service Principal saved within it.
If I were to log into the container and run the command in one line, it will work just fine:
λ docker run -it asdf sh
/bin # az resource delete --ids $(az resource list --tag az=test --query "[].id" -otsv)
/bin #
But if I run the command from outside the container (or image), it wants me to log in to the Azure CLI:
λ docker run asdf az resource delete --ids $(az resource list --tag az=test --query "[].id" -otsv)
Please run 'az login' to setup account.
ERROR: az resource delete: error: argument --ids: expected at least one argument
usage: az resource delete [-h] [--verbose] [--debug]
[--output {json,jsonc,table,tsv}] [--query JMESPATH]
[--ids RESOURCE_IDS [RESOURCE_IDS ...]]
[--resource-group RESOURCE_GROUP_NAME]
[--namespace RESOURCE_PROVIDER_NAMESPACE]
[--parent PARENT_RESOURCE_PATH]
[--resource-type RESOURCE_TYPE]
[--name RESOURCE_NAME] [--api-version API_VERSION]
[--subscription _SUBSCRIPTION]
It seems bash evaluates the $(..) command substitution on the host and doesn't send it through to the image/container. I have tried escaping characters with \, but that brings back another error, even though I know -otsv does actually work.
λ docker run asdf az resource delete --ids \$\(az resource list --tag az=test --query "[].id" -otsv\)
ERROR: az resource delete: 'tsv)' is not a valid value for '--output'. See 'az resource delete --help'.
The most similar choice to 'tsv)' is:
tsv
I'm new to Bash, and I usually use PowerShell, but we have to go with Bash this time. Usually in PowerShell I could pipe the search results into another command to delete the resources, all in one line... but, I've no idea how to do that in this case.
Any ideas, please?
FYI: I will be sending automated commands from Azure Functions to this running container to execute the deletion of said resources, so I can't run an interactive shell.
The error says the reason. If you want to execute the Azure CLI in the container, you can connect to the container using the command docker exec -it containerName bash, or you can do what you did. But above all, you should log in to the Azure CLI first.
For your second error, the parameter should be -o tsv.
Update 1
I tested the command docker run imageName az resource delete and the result gives only the error: please run 'az login' to set up account.
So no matter what you want to do with the Azure CLI, you should log in first.
Update 2
To achieve this, you can add && between the two commands. The whole command will look like this:
docker run docker_image_name az login && az resource delete --ids $(az resource list --name resource_name --query "[].id" -o tsv)
Because the az login command is executed first, you log in first. But don't worry, the second command will still be executed after your login.
I finally got to come back to this after being side-tracked. It turns out it was the inverted commas (single quotes) that were needed:
docker run -it asdf bash -c 'az resource delete --ids $(az resource list --tag az=test --query "[].id" -otsv)'
Thanks to this for giving me the idea: Execute two commands with docker exec
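The single quotes stop the host shell from expanding $(...), so the substitution happens inside the container, where the baked-in Service Principal credentials are available. For the automated, non-interactive case the same pattern should work without -it (a sketch reusing the image name from the question):
# No TTY needed: the container's own bash performs the command substitution
docker run asdf bash -c 'az resource delete --ids $(az resource list --tag az=test --query "[].id" -o tsv)'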

What regex or command can I use to trim this output?

I am trying to capture a specific output in a shell variable for my remote server configuration, which runs commands one after another.
In an Ubuntu environment with the pm2 node package installed, pm2 provides a command whose output is something I need to run.
Command 1:
PM2=$(pm2 startup systemd)
Will output this string when I run echo $PM2:
[PM2] Init System found: systemd [PM2] You have to run this command as root. Execute the following command: sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
I need to capture this exact output as a $var:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
So I can have my cloud init config file run it in the next command.
Command 2:
$PM2
How can I get $PM2 to only have the output value of
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
This may help:
pm2response=$(pm2 startup systemd)                   # Use lower case for user-defined variables
pm2=${pm2response#*Execute the following command: }  # Shell parameter expansion strips everything up to the marker phrase
But this assumes that your string has the phrase Execute the following command: in it, though I guess I am right in assuming so. Good luck!
Note: More on shell parameter substitution [here].
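Putting it together for the cloud-init use case (a sketch; the marker phrase is as above, and eval re-parses the captured line so the embedded $PATH reference expands at run time):
pm2response=$(pm2 startup systemd)
pm2=${pm2response#*Execute the following command: }
eval "$pm2"   # runs: sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username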
