Passing secrets as output between jobs in a GitHub workflow - bash

I am trying to pass a JWT token between jobs, but something prevents it from being passed correctly. According to the docs, if I want to pass variables between jobs I need to use outputs, as explained here. What I am doing is the following:
name: CI
on:
  pull_request:
    branches:
      - main
jobs:
  get-service-url:
    ...does something not interesting to us...
  get-auth-token:
    runs-on: ubuntu-latest
    outputs:
      API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
    steps:
      - name: Get Token
        id: getauthtoken
        run: |
          API_TOKEN:<there is a full JWT token here>
          echo -n "API_TOKEN=$API_TOKEN" >> $GITHUB_OUTPUT
  use-token:
    runs-on: ubuntu-latest
    needs: [get-service-url,get-auth-token]
    name: Run Tests
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: |
          newman run ${{ github.workspace }}/tests/collections/my_collection.json --env-var "service_url=${{needs.get-service-url.outputs.service_URL}}" --env-var "auth_token=${{needs.get-auth-token.outputs.API_TOKEN}}"
So, during a run, in my output I see:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token="
At first I thought there was something wrong in passing the token itself between jobs, so I tried putting a dummy token and exporting it in the output. In my get-auth-token job, the call to output it became:
echo -n "API_TOKEN=test" >> $GITHUB_OUTPUT
and in the log I saw it there:
--env-var "auth_token=test"
so the way I am passing it between jobs is fine. Moreover, the token itself is there and is correct, because I hardcoded one to simplify my tests. Indeed, if in my get-auth-token job I try to echo $API_TOKEN, I see *** in the logs, which makes me understand GitHub is correctly masking it.
I then tried not passing it between jobs at all. So I created the same token, hardcoded, right before the newman run command and referenced it in the newman run directly, and tada! The log now is:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token=***"
So the token is there! But I need it to come from another job. Something is preventing the token from being passed between jobs, and I don't know how to get around it.

I found a trick to make this happen. It consists of temporarily "obfuscating" the secret from GitHub's point of view.
In the job where I retrieve the secret, I base64-encode it and export it to GITHUB_OUTPUT:
API_TOKEN_BASE64=`echo -n <my_secret> | base64 -w 0`
echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
In the job where I need the secret, I decode it (and use it where needed):
API_TOKEN=`echo -n ${{needs.get-auth-token.outputs.API_TOKEN}} | base64 --decode`
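For reference, here is a minimal end-to-end sketch of how the two jobs could be wired together with this base64 trick; the job, step, and secret names below are illustrative, not taken from the original workflow. The likely reason the original output arrived empty is that GitHub skips setting job outputs it detects as containing a secret, so encoding the value keeps the masking from blanking it:
get-auth-token:
  runs-on: ubuntu-latest
  outputs:
    API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
  steps:
    - name: Get Token
      id: getauthtoken
      env:
        MY_SECRET: ${{ secrets.MY_SECRET }}   # hypothetical secret name
      run: |
        # encode the secret so the masked value survives the job-output hand-off
        API_TOKEN_BASE64=$(echo -n "$MY_SECRET" | base64 -w 0)
        echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
use-token:
  runs-on: ubuntu-latest
  needs: [get-auth-token]
  steps:
    - name: Use token
      run: |
        # decode the output back into the real token before using it
        API_TOKEN=$(echo -n "${{ needs.get-auth-token.outputs.API_TOKEN }}" | base64 --decode)
        echo "token length: ${#API_TOKEN}"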

Related

Get github.rest.issues.createComment() to use an environment variable for multi-line comment

I have been trying to understand how to get a multi-line comment written to a PR using GitHub Actions. I was trying to use github.rest.issues.createComment() as shown in "Commenting a pull request...", and to handle the multi-line issue with an environment variable as shown in the workflow commands docs. The ultimate goal is to take some multi-line stdout from a Python script (or a log file) and post it as a comment back to the PR that the workflow is running on. The yml file below runs fine up until the last step, where I try to access the environment variable I created and use it as the body of createComment(). The environment variable is created and appears to be available, but the step fails when I try to use it as the body of the comment. The error from GitHub Actions is below the code. If I add quotes, like body: "${{env.SCRIPT_OUTPUT}}", I get the same error. I would like to use createComment() if possible; I know there is a create-comment action from Peter Evans that I will likely try next, but I am trying to understand why this is not working.
name: GitHub Actions Test Multi-line
on:
  pull_request:
    branches:
      - Dev
jobs:
  Run-check-references:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - run: |
          SCRIPT_OUTPUT=$(cat << EOF
          first line
          second line
          third line
          EOF
          )
          echo "SCRIPT_OUTPUT<<EOF" >> $GITHUB_ENV
          echo "$SCRIPT_OUTPUT" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV
      - run: |
          echo "${{env.SCRIPT_OUTPUT}}"
          echo $SCRIPT_OUTPUT
      - uses: actions/github-script@v5
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: ${{env.SCRIPT_OUTPUT}}
            })
Run actions/github-script@v5
SyntaxError: Invalid or unexpected token
at new AsyncFunction (<anonymous>)
at callAsyncFunction (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:4706:56)
at main (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:4761:26)
at Module.272 (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:4745:1)
at __webpack_require__ (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:24:31)
at startup (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:43:19)
at /home/runner/work/_actions/actions/github-script/v5/dist/index.js:49:18
at Object.<anonymous> (/home/runner/work/_actions/actions/github-script/v5/dist/index.js:52:10)
at Module._compile (internal/modules/cjs/loader.js:959:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:995:10)
Error: Unhandled error: SyntaxError: Invalid or unexpected token
The suggestion from @riqq to use backticks solved the issue. So I just had to change body: ${{env.SCRIPT_OUTPUT}} to:
body: `${{env.SCRIPT_OUTPUT}}`
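The backticks work because the expanded text ends up inside a JavaScript template literal. As a sketch of an alternative (not from the original post), the variable can also be read from process.env inside github-script, which avoids expanding multi-line text into the JavaScript source at all; SCRIPT_OUTPUT set via GITHUB_ENV is already exported to later steps:
- uses: actions/github-script@v5
  with:
    github-token: ${{secrets.GITHUB_TOKEN}}
    script: |
      // SCRIPT_OUTPUT was written to GITHUB_ENV in an earlier step,
      // so it is available here as a normal environment variable
      await github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: process.env.SCRIPT_OUTPUT
      })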

How do I assign exe output to a variable in gitlab ci scripts?

When running my gitlab ci I need to check whether a specified svn directory exists.
I was using the script:
variables:
  DIR_CHECK: "default"
stages:
  - setup
  - test
  - otherDebugJob
.csharp:
  only:
    changes:
      - "**/*.cs"
      - "**/*.js"
setup:
  script:
    - $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo $DIR_CHECK
test:
  script:
    - echo "DIR_CHECK is blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK == ''
otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK != ''
The svn command works and echoes back the correct reply, but $DIR_CHECK never gets set to anything but the original default; it does not store the string returned by the svn command.
How do I store the string returned by an exe in a variable in GitLab CI?
Test run:
Executing "step_script" stage of the job script 00:00
$ $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
svn: E170000: Illegal repository URL 'https://server.fsl.local:port/svn/myco/personal/TestNotReal'
$ echo $DIR_CHECK
Cleaning up file based variables 00:01
Job succeeded
Passing variables between jobs
Unfortunately, you cannot use the DIR_CHECK variable the way you described. The list of jobs to be executed is generated before the jobs actually run, which means that for all of them DIR_CHECK will be equal to default. First of all, here are a few ways you can pass variables between jobs:
First way
You can add the desired command to the before_script section in your .csharp template:
.csharp:
  before_script:
    - export DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
and extend the other jobs with this .csharp template.
Second way
You can pass variables between jobs with job artifacts:
setup:
  stage: setup
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo "DIR_CHECK=$DIR_CHECK" > dotenv_file
  artifacts:
    reports:
      dotenv:
        - dotenv_file
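A later job can then pick the variable up from the dotenv report; a minimal sketch of the consuming side, assuming the test job depends on setup (the needs wiring is my assumption, not part of the original answer):
test:
  stage: test
  needs:
    - job: setup
      artifacts: true
  script:
    # DIR_CHECK is injected as a job variable from setup's dotenv report
    - echo "DIR_CHECK is '$DIR_CHECK'"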
Third way
You can use trigger or parent/child pipelines to pass variables into other pipelines.
staging:
  variables:
    DIR_CHECK: "you are awesome, guys!"
  stage: deploy
  trigger: my/deployment
In the triggered pipeline your variable will exist from the very start, and all the rules will be applied correctly.
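A job in the triggered (downstream) pipeline could then gate on that variable as usual; a small illustrative sketch (the job name is made up):
check-dir:
  script:
    - echo "DIR_CHECK is '$DIR_CHECK'"
  rules:
    # DIR_CHECK already exists when this pipeline is created,
    # so the rule is evaluated against the value passed by the trigger
    - if: $DIR_CHECK != ""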
Solution
In your case, if you really don't want to include otherDebugJob step in your pipeline you can do the following:
First approach
This is quite an easy way and it will work, but it does not look like best practice. We already know how to pass our DIR_CHECK variable from the setup step, so just add a check to the test step's script block:
script:
  - |
    if [ -n "$DIR_CHECK" ]; then
      exit 0
    fi
  - echo "DIR_CHECK is blank"
  - echo $DIR_CHECK
Do almost the same thing for the otherDebugJob, but exit early when DIR_CHECK is empty, with if [ -z "$DIR_CHECK" ] (exit 0 ends the script early while still marking the job as successful).
This approach is helpful when your pipeline does not contain a lot of steps, but a few more steps follow after test and otherDebugJob.
Second approach
You can fail your setup job and then handle that failure in the otherDebugJob job:
setup:
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - |
      if [ -z "$DIR_CHECK" ]; then
        exit 1
      fi
otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
  when: on_failure
This approach is useful if you only want to do some debug work after the setup step. After the on_failure jobs run, the pipeline will be marked as failed and stopped.

Getting the value of a variable in an Azure pipeline

enviornment: 'dev'
acr-login: $(enviornment)-acr-login
acr-secret: $(enviornment)-acr-secret
dev-acr-login and dev-acr-secret are secrets stored in Key Vault for the ACR login and ACR secret.
In the pipeline, I get the secrets with this task:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
This task creates task variables named 'dev-acr-login' and 'dev-acr-secret'.
Now, if I want to log in to Docker, I am not able to do that.
The following code works and I am able to log in to ACR:
- bash: |
    echo $(dev-acr-secret) | docker login \
      $(acrName) \
      -u $(dev-acr-login) \
      --password-stdin
  displayName: 'docker login'
The following does not work. Is there a way I can use the variable names $(acr-login) and $(acr-secret) rather than the actual key names from Key Vault?
- bash: |
    echo $(echo $(acr-secret)) | docker login \
      $(acrRegistryServerFullName) \
      -u $(echo $(acr-login)) \
      --password-stdin
  displayName: 'docker login'
You could pass them as environment variables:
- bash: |
    echo $(echo $ACR_SECRET) | ...
  displayName: docker login
  env:
    ACR_SECRET: $(acr-secret)
But what is the purpose, as opposed to just echoing the password values the way you said works in the other example? As long as the task creates secure variables, they will be masked in the logs. You'd need that anyway, since they would otherwise show up in diagnostic logs if someone enabled diagnostics, which anyone can do.
An example to do that:
- bash: |
    echo "##vso[task.setvariable variable=acr-login;issecret=true;]$ACR_SECRET"
  env:
    ACR_SECRET: $($(acr-secret)) # Should expand recursively
See Define variables : Set secret variables for more information and examples.
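Putting the env-mapping idea together with the original login step, here is a sketch of a docker login that maps both Key Vault values to environment variables (the variable names come from the question; the exact mapping is my assumption, not a confirmed part of either answer):
- bash: |
    # log in to ACR using the env-mapped secrets instead of inline $(...) macros
    echo "$ACR_SECRET" | docker login \
      $(acrRegistryServerFullName) \
      -u "$ACR_LOGIN" \
      --password-stdin
  displayName: 'docker login'
  env:
    ACR_LOGIN: $(dev-acr-login)
    ACR_SECRET: $(dev-acr-secret)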

Gitlab run job either by trigger or changes

I am trying to trigger a particular job in CI on either of two conditions:
it is triggered by another job in the same pipeline
OR
changes: somefile.txt
My CI is as follows:
job1:
  stage: build
  script:
    - echo "JOb1"
    - curl -X POST -F token=2342344444 -F "variables[TRIGGER_JOB]=job1" -F ref=master https://main.gitlab.myconmpanyxyz.com/api/v4/projects/1234/trigger/pipeline
  only:
    changes:
      - job1.md
job2: # this does NOT run, as expected, because TRIGGER_JOB is set to job1
  stage: test
  script:
    - echo "Job2"
  rules:
    - if: $TRIGGER_JOB == "job2"
job3: # this RUNS as expected because of the variable TRIGGER_JOB
  stage: test
  script:
    - echo "Job3"
  rules:
    - if: $TRIGGER_JOB == "job1"
job4: # this also RUNS, but this should not be the expected behavior
  stage: test
  script:
    - echo "job4"
  rules:
    - if: $TRIGGER_JOB == "xyz"
    - changes:
        - job4.md
After job1 finishes, it also needs to call job4 and not any other jobs (job2 in this case), so I am using curl to trigger the pipeline itself. If there are better ways of calling a specific job in the same CI, please also let me know.
I have already seen this Stack Overflow page, but it does not help because my job needs to be triggered by either of two conditions, which is not allowed by gitlab-ci.
I need job4 to be called on either of the two conditions: if TRIGGER_JOB == "job1" or if there are any changes in the job4.md file.
Currently job4 runs if changes are made to job4.md, however it also runs if job1 is triggered, which as far as I know should not be the expected behavior according to the docs. Can anyone please give me some leads on how to create this kind of design?
Your solution was almost correct, but the changes keyword with only or except only works if the pipeline is triggered by a push or a merge_request event. This is recorded in the variable CI_PIPELINE_SOURCE. When you trigger the pipeline by calling the API, CI_PIPELINE_SOURCE contains the value trigger, and therefore only:changes always returns true, which triggers job1 again and ends in an endless loop. You can add a simple except rule to job1 to prevent that:
job1:
  stage: build
  script:
    - echo "JOb1"
    - curl -X POST -F token=2342344444 -F "variables[TRIGGER_JOB]=job1" -F ref=master https://main.gitlab.myconmpanyxyz.com/api/v4/projects/1234/trigger/pipeline
  only:
    changes:
      - job1.md
  except:
    variables:
      - $CI_PIPELINE_SOURCE == "trigger"
You can find more information on only/except:changes in the documentation.
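Along the same lines, job4 itself can be expressed with rules so that its changes condition only applies to push or merge request pipelines, while the trigger variable still lets the upstream job start it. A sketch of one way this could look (not part of the original answer):
job4:
  stage: test
  script:
    - echo "job4"
  rules:
    # run when the API trigger passed TRIGGER_JOB=job1
    - if: $TRIGGER_JOB == "job1"
    # otherwise only react to file changes on pipelines where changes: is meaningful
    - if: $CI_PIPELINE_SOURCE == "push" || $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - job4.md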

Run Google Cloud Build Command for Each Result of Array

I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run affected:apps --base=origin/master --head=HEAD']
That command returns a result like this:
> project-name#0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do with that result in Google Cloud Build. With a Node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');
function getApps() {
exec('npm run affected:apps', (err, out) => {
if (err) {
console.log(null);
} else {
const lines = out.split('\n');
const apps = lines[lines.length - 2].split(' ');
console.log(JSON.stringify(apps));
}
});
}
getApps();
That returns an array of the affected apps, and null if there is an error. Still, even with that, I'm not sure what I would do for the next step in Google Cloud Build. With the results of either the command or that script, ideally I'd be able to run a docker build command like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and do nothing if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx Workspaces? Or if you've just got Google Cloud Build experience and know what my next step should be, that would be great.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step could read the file and call any other commands, e.g. build a Docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
One trick might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process per project.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the Node script, I'd suggest writing a file with the affected apps in a Cloud Build custom step. This file will be written to the "/workspace" directory and will be available to any other custom step executed later; with this you should be able to run the docker build command.
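A rough sketch of that idea, assuming the Node script above is saved in the repo as get-affected.js (the file name and step wiring are my assumptions):
- name: node:10.15.1
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # /workspace is shared between Cloud Build steps, so later steps can read this file
      node get-affected.js > /workspace/affected.json
      cat /workspace/affected.json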
