Github Actions - Terraform Init - Too many command line arguments - bash

I'm currently working on a pipeline where I'm configuring a backend so that my Terraform configuration will run, but the problem I have is that I'm getting the following issue:
Too many command-line arguments. Did you mean to use -chdir?
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
My current command is the following:
- name: Terraform Init
  run: |
    terraform init -backend=true -input=false \
    -backend-config=subscription_id=${{ secrets.AZURE_SUBS_ID }} \
    -backend-config=resource_group_name={{ secrets.AZURE_RG }} \
    -backend-config=storage_accname=${{ SECRETS.AZURE_AD_STORAGE_ACCOUNT }} \
    -backend-config=container_name="tfstate" \
    -backend-config=tenant_id=${{ secrets.AZURE_TENANT_ID }} \
    -backend-config=client_id=${{ secrets.AZURE_CLIENT_ID }} \
    -backend-config=client_secret=${{ secrets.AZURE_CSECRET }}
By using my terminal I can use a file like this, and I'm able to set up all my environment variables on my local machine. However, when I'm using the pipeline it seems that I can't do that. Does anyone know what the best thing to do is?

This is most likely a YAML formatting issue, since it is Terraform's argument parsing that throws the error. The | literal block scalar inserts newline characters at the end of each line; you need the folded scalar > instead. The \ continuations are also unnecessary, because this is then a multi-line YAML string rather than a multi-line shell command. Note as well that the resource_group_name line is missing the $ in front of {{ secrets.AZURE_RG }}, so the literal braces are word-split into extra arguments, which triggers the same error; that is corrected below too:
- name: Terraform Init
  run: >
    terraform init -backend=true -input=false
    -backend-config=subscription_id=${{ secrets.AZURE_SUBS_ID }}
    -backend-config=resource_group_name=${{ secrets.AZURE_RG }}
    -backend-config=storage_accname=${{ secrets.AZURE_AD_STORAGE_ACCOUNT }}
    -backend-config=container_name="tfstate"
    -backend-config=tenant_id=${{ secrets.AZURE_TENANT_ID }}
    -backend-config=client_id=${{ secrets.AZURE_CLIENT_ID }}
    -backend-config=client_secret=${{ secrets.AZURE_CSECRET }}
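To see why the scalar style matters, here is a minimal sketch (step names made up) of the same flag written under both scalars; with | the second line is executed as its own command, while > folds it onto the first:

- name: literal block scalar (broken)
  run: |
    terraform init
    -input=false    # the preserved newline makes this run as a separate command
- name: folded scalar (works)
  run: >
    terraform init
    -input=false    # folded onto the previous line: terraform init -input=false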

Related

How to use an anchor to prevent repetition of code sections?

Say I have a number of jobs that all run a similar series of scripts, but need a few variables that change between them:
test a:
  stage: test
  tags:
    - a
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t a -f Dockerfile.a .

test b:
  stage: test
  tags:
    - b
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t b -f Dockerfile.b .
All I need is to be able to define e.g.
- docker build -t ${WHICH} -f Dockerfile.${WHICH} .
If only I could make an anchor like:
.x: &which_ref
  - echo "env is $(env)"
  - echo etcetera
  - echo and so on
  - docker build -t $WHICH -f Dockerfile.$WHICH .
And include it there:
test a:
  script:
    - export WHICH=a
    <<: *which_ref
This doesn't work and in a yaml validator I get errors like
Error: YAMLException: cannot merge mappings; the provided source object is unacceptable
I also tried making an anchor that contains some entries under script inside of it:
.x: &which_ref
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t $WHICH -f Dockerfile.$WHICH .
This means I have to include it from one level higher up. So this does not error, but all it accomplishes is that the later declared script section overrides the first one.
So I'm losing hope. It seems like I will just need to abstract the sections away into their own shell scripts and call them with arguments or whatever.
The YAML merge key << is a non-standard extension for YAML 1.1, which was superseded by YAML 1.2 about 14 years ago. Its usage is discouraged.
The merge key works on mappings, not on sequences. It cannot deep-merge. Thus what you want to do is not possible to implement with it.
Generally, YAML isn't designed to process data, it just loads it. The merge key is an outlier and didn't find its way into the standard for good reasons. You need a pre- or postprocessor to do complex processing, and GitLab CI doesn't offer anything besides simple variable expansion, so you're out of luck.
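That said, if all that changes between the jobs is a single value, the variable expansion GitLab CI does offer is usually enough. A sketch along these lines (job and template names are illustrative) keeps the shared script in one place via extends and varies only a job-level variable:

.which_template:
  stage: test
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t $WHICH -f Dockerfile.$WHICH .

test a:
  extends: .which_template
  tags:
    - a
  variables:
    WHICH: a

test b:
  extends: .which_template
  tags:
    - b
  variables:
    WHICH: b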

Github Actions - Export JSON object from CURL response to ENV variable

Trying to save API responses in their original JSON format as env variables. Getting formatting errors.
echo "GIT_PR_OBJECT=$(curl -u USER:TOKEN https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.number }})" >> $GITHUB_ENV
Error from Github:
Error: Unable to process file command 'env' successfully.
Error: Invalid environment variable format ' "sha": "123456789",'
My plan is to access this object multiple times throughout the workflow; that way I only need to make one call to the GitHub API rather than a few. Here is how I access the data object saved in the env var:
echo "GIT_EMAIL=$(${{ env.GIT_COMMIT_OBJECT }} | jq -r '.commit.author.email')" >> $GITHUB_ENV
How can I save my curl response as an env variable in its JSON format?
Why not save the JSON content to a file instead? Saving it to an environment variable, while it sounds fancy, also smells like a can of worms you don't have to open in the first place.
Taking your example:
- run: |
    curl -u USER:TOKEN https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.number }} > $HOME/pr.json
    echo "GIT_EMAIL=$(jq -r '.commit.author.email' $HOME/pr.json)" >> $GITHUB_ENV
  shell: bash
While this is a working solution in my eyes, I fail to understand why it has to be an environment variable, which is your main question here.
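For what it's worth, once the response is on disk, any later step in the same job can keep querying it without another API call. A small sketch (the jq paths are just illustrative fields from the pull request payload):

- run: |
    # no extra API call: reuse the file written by the earlier step
    echo "PR title:  $(jq -r '.title' $HOME/pr.json)"
    echo "PR author: $(jq -r '.user.login' $HOME/pr.json)"
  shell: bash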

How to set variables using terragrunt before_hook

I need to use some gcloud commands in order to create a Redis instance on GCP as terraform does not support some options that I need.
I'm trying this:
terraform {
  # Before apply, run script.
  before_hook "create_redis_script" {
    commands = ["apply"]
    execute  = ["REDIS_REGION=${local.module_vars.redis_region}", "REDIS_PROJECT=${local.module_vars.redis_project}", "REDIS_VPC=${local.module_vars.redis_vpc}", "REDIS_PREFIX_LENGHT=${local.module_vars.redis_prefix_lenght}", "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}", "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}", "REDIS_NAME=${local.module_vars.redis_name}", "REDIS_SIZE=${local.module_vars.redis_size}", "REDIS_ZONE=${local.module_vars.redis_zone}", "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}", "REDIS_VERSION=${local.module_vars.redis_version}", "bash", "../../../scripts/create-redis-instance.sh"]
  }
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
  --global \
  --purpose=VPC_PEERING \
  --prefix-lenght=$REDIS_PREFIX_LENGHT \
  --description=$REDIS_RANGE_DESCRIPTION \
  --network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as parameters instead of environment variables.
It involves modifying the script and is a far less clear declaration, but it works :|
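A rough sketch of what that looks like with the values from the question (the argument order is arbitrary; the first element of execute has to be the executable, so the values move to the end of the list and the script reads them as positional parameters):

terraform {
  before_hook "create_redis_script" {
    commands = ["apply"]
    execute  = ["bash", "../../../scripts/create-redis-instance.sh", "${local.module_vars.redis_reserved_range_name}", "${local.module_vars.redis_prefix_lenght}", "${local.module_vars.redis_range_description}", "${local.module_vars.redis_vpc}"]
  }
}

# at the top of create-redis-instance.sh
REDIS_RESERVED_RANGE_NAME=$1
REDIS_PREFIX_LENGHT=$2
REDIS_RANGE_DESCRIPTION=$3
REDIS_VPC=$4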

Getting the value of a variable in an Azure pipeline

variables:
  enviornment: 'dev'
  acr-login: $(enviornment)-acr-login
  acr-secret: $(enviornment)-acr-secret
dev-acr-login and dev-acr-secret are secrets stored in keyvault for acr login and acr secret.
In the pipeline I am getting the secrets with this task:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
This task will create task variables with the names 'dev-acr-login' and 'dev-acr-secret'.
Now if I want to log in to Docker, I am not able to do that.
The following code works and I am able to log in to ACR:
- bash: |
    echo $(dev-acr-secret) | docker login \
      $(acrName) \
      -u $(dev-acr-login) \
      --password-stdin
  displayName: 'docker login'
The following does not work. Is there a way that I can use the variable names $(acr-login) and $(acr-secret) rather than the actual keys from Key Vault?
- bash: |
    echo $(echo $(acr-secret)) | docker login \
      $(acrRegistryServerFullName) \
      -u $(echo $(acr-login)) \
      --password-stdin
  displayName: 'docker login'
You could pass them as environment variables:
- bash: |
    echo $(echo $ACR_SECRET) | ...
  displayName: docker login
  env:
    ACR_SECRET: $(acr-secret)
But what is the purpose, as opposed to just echoing the password values as you said works in the other example? As long as the task is creating secure variables, they will be protected in logs. You'd need to do that anyway, since they would otherwise show up in diagnostic logs if someone enabled diagnostics, which anyone can do.
An example to do that:
- bash: |
    echo "##vso[task.setvariable variable=acr-login;issecret=true;]$ACR_SECRET"
  env:
    ACR_SECRET: $($(acr-secret)) # Should expand recursively
See Define variables : Set secret variables for more information and examples.
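As a follow-on sketch (the step contents are illustrative), a later step in the same job can then consume the secret variable set above, but it still has to be mapped in explicitly through env; secret variables are never exported to the shell automatically, and their values stay masked in the logs:

- bash: |
    # use the secret without printing it; here we only report its length
    echo "acr-login value length: ${#ACR_LOGIN}"
  displayName: 'consume the secret variable'
  env:
    ACR_LOGIN: $(acr-login)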

Snakemake conda env parameter is not taken from config.yaml file

I use a conda env that I create manually, not automatically using Snakemake. I do this to keep tighter version control.
Anyway, in my config.yaml I have the following line:
conda_env: '/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake'
Then, at the start of my Snakefile I read that variable (reading variables from config in your shell part does not seem to work, am I right?):
conda_env = config['conda_env']
Then in a shell part I hail said parameter like this:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {conda_env}
        rsem-calculate-expression \
            --paired-end \
            {input} \
            {rsem_ref_base} \
            {analyzed_dir}/{wildcards.sample} \
            --strandedness reverse \
            --num-threads {threads} \
            --star \
            --star-gzipped-read-file \
            --star-output-genome-bam
        '''
Notice the {conda_env}. Now this gives me the following error:
Could not find conda environment: None
You can list all discoverable environments with `conda info --envs`.
Now, if I replace {conda_env} with its value directly, /rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake, it does work! I don't have any trouble reading other parameters using this method (like rsem_ref_base and analyzed_dir in the example rule above).
What could be wrong here?
Highest regards,
Freek.
The pattern I use is to load variables into params, so something along the lines of
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    params:
        conda_env=config['conda_env']
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {params.conda_env}
        rsem-calculate-expression \
            ...
        '''
Although, I'd also never do this with a conda environment, because Snakemake has conda environment management built-in. See this section in the docs on Integrated Package Management for details. This makes reproducibility much more manageable.
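As a rough sketch of that built-in route (the environment file name and its contents are made up), the environment is declared on the rule and Snakemake activates it for you when invoked with --use-conda, so the source activate line disappears:

rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    conda:
        'envs/rsem.yaml'   # hypothetical file pinning rsem, star, etc.
    threads: 8
    shell:
        'rsem-calculate-expression --paired-end {input} {rsem_ref_base} '
        '{analyzed_dir}/{wildcards.sample} --num-threads {threads} --star'

Run with snakemake --use-conda, Snakemake creates the environment from the YAML file and uses it for this rule only, which keeps the version pinning in your repository.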
