Google Cloud Build non-shell secret substitution - google-cloud-build

I am deploying a Cloud Function with secret environment variables that I would like to add to the function. The example notes: "# Note: You need a shell to resolve environment variables with $$".
I am using the gcloud builder, which does not appear to be a shell, so my environment variables come through literally as $USER and $PASS without substitution. I've tried $USER and ${USER} as well, but then it complains about not having valid substitutions.
How do I get my secrets from the google cloud build environment into my google cloud function environment?
The first step is to verify that my KMS stuff is working, which it appears to be.
- name: 'ubuntu'
  args: ['printenv']
  secretEnv: ['USER', 'PASS']
- name: "gcr.io/cloud-builders/gcloud"
  args:
    [
      "functions",
      "deploy",
      "fname",
      "--trigger-http",
      "--runtime=nodejs10",
      "--service-account=functions-secrets@northpoint-production.iam.gserviceaccount.com",
      "--set-env-vars=USER=$$USER,PASS=$$PASS",
      "--entry-point=fname",
      "--project=[project]",
    ]
  dir: "functions"
  secretEnv: ['USER', 'PASS']
secrets:
- kmsKeyName: projects/[project]/locations/global/keyRings/[ring]/cryptoKeys/[key]
  secretEnv:
    USER: CiQAML6I...
    PASS: CiQAML6IGO3wO...

You are missing the entrypoint: 'bash' part per the documentation you shared.
steps:
- name: "gcr.io/cloud-builders/gcloud"
  entrypoint: "bash"
  args:
    [
      "-c",
      "gcloud functions deploy fname --trigger-http --runtime=nodejs10 --service-account=[account] --set-env-vars=USER=$$USER,PASS=$$PASS --entry-point=fname --project=[project]",
    ]
  dir: "functions"
  secretEnv: ["USER", "PASS"]
secrets:
- kmsKeyName: projects/[project]/locations/global/keyRings/[ring]/cryptoKeys/[key]
  secretEnv:
    USER: CiQAML...
    PASS: CiQAML...
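To see why the shell matters: Cloud Build turns `$$USER` into the literal `$USER` before the step runs, and only a shell then expands it against the variables set by secretEnv. A quick local sketch of this behavior (the values alice/s3cret are made up stand-ins for the decrypted secrets):

```shell
# With entrypoint: bash, the step's argument goes through a shell, so the
# environment variables injected by secretEnv get expanded. Without a shell,
# gcloud would receive the literal string "$USER" unchanged.
USER=alice PASS=s3cret bash -c 'echo "--set-env-vars=USER=$USER,PASS=$PASS"'
```

Running this prints the fully substituted flag, which is what gcloud needs to see.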

YAML syntax to get the value of an Env variable defined against a specific Environment in GitHub Actions

Under my GitHub repo Settings > Environments, I define an environment named PPM_DEV, and under PPM_DEV I define an environment variable named HOSTNAME with a value set in the configuration page.
What is the YAML syntax in a GitHub Actions workflow to read the value of this variable?
Basically, I may have two environments defined, PPM_DEV and PPM_TEST,
but where do I set the environment context to pull the variable HOSTNAME from the PPM_DEV environment?
In the example below, I am trying to populate a variable VARHOSTNAME with the value of the environment variable HOSTNAME that is pre-defined against the environment named PPM_DEV.
However it fails with
The workflow is not valid. .github/workflows/Ext_Conn_v2.yml (Line: 10, Col: 7): Unexpected value 'VARHOSTNAME'
name: Ext_Conn_v2
on:
  workflow_dispatch:
jobs:
  RunonVM:
    runs-on: ubuntu-latest
    environment:
      name: ppm_dev
      VARHOSTNAME: ${{ env.HOSTNAME }}
    steps:
      - name: Run a command
        run: |
          echo "This workflow was manually triggered."
          echo "value of the variable HOSTNAME: " ${VARHOSTNAME}
          pwd
          echo "end of run"
You need to use jobs.<job_id>.steps[*].env to specify the environment variables or secrets, using the vars or secrets contexts:
Here's an example with the secrets context:
jobs:
  job:
    runs-on: ubuntu-latest
    environment: ppm_dev
    steps:
      - name: Command
        env:
          HOSTNAME: ${{ secrets.HOSTNAME }}
        run: |
          echo "HOSTNAME: $HOSTNAME"
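For values that are not sensitive, the same pattern works with the vars context instead. A hedged sketch, assuming HOSTNAME was created as an environment variable (not an environment secret) under the ppm_dev environment:

```yaml
jobs:
  job:
    runs-on: ubuntu-latest
    environment: ppm_dev
    steps:
      - name: Command
        env:
          HOSTNAME: ${{ vars.HOSTNAME }}
        run: |
          echo "HOSTNAME: $HOSTNAME"
```

The `environment: ppm_dev` key is what selects which environment's variables and secrets those contexts resolve against.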

GitHub Actions how to use stdout from bashfile

I have the following GitHub Actions yml file that comments on the PR with stdout. However, I have noticed that when using a bash script, bash.sh, the comment is empty. Inside, bash.sh runs a series of commands that generate output. Is there any way to get the output from bash.sh?
- name: "Run bash"
  id: plan
  run: ./bash.sh
- name: "Comment PR"
  uses: actions/github-script@0.9.0
  env:
    STDOUT: "```${{ steps.plan.outputs.stdout }}```"
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      github.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: process.env.STDOUT
      })
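One common approach (a sketch, not tested against this exact workflow) is to capture the script's stdout yourself and write it to $GITHUB_OUTPUT using the documented multiline name<<DELIMITER syntax, so that steps.plan.outputs.stdout is actually populated. Simulated locally here, with a temp file standing in for the runner-provided GITHUB_OUTPUT:

```shell
# The runner normally sets GITHUB_OUTPUT for each step; simulate it here.
GITHUB_OUTPUT=$(mktemp)

# Stand-in for `out=$(./bash.sh)` -- capture the script's stdout into a variable.
out=$(printf 'line one\nline two')

# Multiline step outputs use the name<<DELIMITER heredoc form.
{
  echo "stdout<<EOF"
  echo "$out"
  echo "EOF"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

With this in the `plan` step's `run:` block, the later `${{ steps.plan.outputs.stdout }}` reference would resolve to the captured text.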

GitHub Actions pass list of variables to shell script

Using GitHub Actions, I would like to invoke a shell script with a list of directories.
(Essentially equivalent to passing an Ansible vars list to the shell script)
I don't really know how; is this even possible? Here's what I have so far; how could one improve it?
name: CI
on:
  push:
    branches:
      - master
    tags:
      - v*
  pull_request:
jobs:
  run-script:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Run script on targets
        run: ./.github/workflows/script.sh {{ targets }}
        env:
          targets:
            - FolderA/SubfolderA/
            - FolderB/SubfolderB/
Today I was able to do this with the following YAML (truncated):
...
with:
  targets: |
    FolderA/SubfolderA
    FolderB/SubfolderB
The actual GitHub Action passes this as an argument like the following:
runs:
  using: docker
  image: Dockerfile
  args:
    - "${{ inputs.targets }}"
What this does is simply send the parameters as a single string with the newline characters embedded, which can then be iterated over like an array in a POSIX-compliant manner via the following shell code:
#!/bin/sh -l
targets="${1}"
for target in $targets
do
  echo "Proof that this code works: $target"
done
Which should be capable of accomplishing your desired task, if I understand the question correctly. You can always run something like sh ./script.sh $target in the loop if your use case requires it.
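As a quick local check, the loop above behaves as described when given a newline-separated string. Note that POSIX word splitting also splits on spaces, so this only works for paths without whitespace:

```shell
# Simulate the "${1}" argument the action would pass in.
targets="FolderA/SubfolderA
FolderB/SubfolderB"

# Unquoted $targets is split on whitespace (including newlines),
# yielding one iteration per directory.
for target in $targets
do
  echo "Proof that this code works: $target"
done
```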

How do I set multiple commands in config.yaml in kubernetes?

I tried to use the accepted answer from this question, but I see the following error when I apply it:
containers:
  - name: service
    command: ["/bin/sh", "-c"]
    args: ["echo 123; my_executable_binary"]
/bin/sh: 0: -c requires an argument
Please try like this:
containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
      - containerPort: 1234
    command: ["/bin/sh", "-c"]
    args:
      - source /env/db_cred.env;
        application-command;

How to set multiple commands in one yaml file with Kubernetes?

This official document shows how to run a command in a YAML config file:
https://kubernetes.io/docs/tasks/configure-pod-container/
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: "ubuntu:14.04"
      env:
        - name: MESSAGE
          value: "hello world"
      command: ["/bin/sh", "-c"]
      args: ["/bin/echo \"${MESSAGE}\""]
If I want to run more than one command, how do I do that?
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting, a semicolon separates commands, and && runs the following command only if the preceding one succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded.
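The difference between the two separators is easy to demonstrate with a throwaway shell, where `false` stands in for a failing command:

```shell
# ';' separates commands unconditionally: the echo runs even though false fails.
sh -c 'false; echo "ran after semicolon"'

# '&&' is conditional: the first echo is skipped because false fails,
# but the semicolon-separated echo still runs.
sh -c 'false && echo "skipped"; echo "ran after"'
```

The first command prints "ran after semicolon"; the second prints only "ran after".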
Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without affecting the image; you just need to restart the pod. For example, for a mysql dump, the container spec could be something like this:
containers:
  - name: mysqldump
    image: mysql
    command: ["/bin/sh", "-c"]
    args:
      - echo starting;
        ls -la /backups;
        mysqldump --host=... -r /backups/file.sql db_name;
        ls -la /backups;
        echo done;
    volumeMounts:
      - ...
The reason this works is that YAML concatenates all the lines after the "-" into one, so sh runs one long string: "echo starting; ls... ; echo done;".
If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"
    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: "ubuntu:14.04"
      command:
        - /bin/entrypoint.sh
      volumeMounts:
        - name: configmap-volume
          mountPath: /bin/entrypoint.sh
          readOnly: true
          subPath: entrypoint.sh
  volumes:
    - name: configmap-volume
      configMap:
        defaultMode: 0700
        name: my-configmap
This cleans up your pod spec a little and allows for more complex scripting.
$ kubectl logs my-pod
Do this
Do that
If you want to avoid concatenating all commands into a single command with ; or && you can also get true multi-line scripts using a heredoc:
command:
  - sh
  - "-c"
  - |
    /bin/bash <<'EOF'
    # Normal script content possible here
    echo "Hello world"
    ls -l
    exit 123
    EOF
This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
I am not sure whether the question is still active, but since I did not find this solution in the answers above, I decided to write it down.
I use the following approach:
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        command1
        command2 && command3
I know my example relates to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility, as it mirrors writing a standard Bash script.
IMHO the best option is to use YAML's native block scalars. Specifically in this case, the folded style block.
By invoking sh -c you can pass arguments to your container as commands, but if you want to elegantly separate them with newlines, you'd want to use the folded style block, so that YAML will know to convert newlines to whitespaces, effectively concatenating the commands.
A full working example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: busy
      image: busybox:1.28
      command: ["/bin/sh", "-c"]
      args:
        - >
          command_1 &&
          command_2 &&
          ...
          command_n
Here is my successful run
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
    - command:
        - /bin/sh
        - -c
        - |
          echo "running below scripts"
          i=0;
          while true;
          do
            echo "$i: $(date)";
            i=$((i+1));
            sleep 1;
          done
      name: busybox
      image: busybox
Here is one more way to do it, with output logging.
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: log-vol
          mountPath: /var/mylog
      command:
        - /bin/sh
        - -c
        - >
          i=0;
          while [ $i -lt 100 ];
          do
          echo "hello $i";
          echo "$i : $(date)" >> /var/mylog/1.log;
          echo "$(date)" >> /var/mylog/2.log;
          i=$((i+1));
          sleep 1;
          done
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: log-vol
      emptyDir: {}
Here is another way to run multi line commands.
apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
        - command:
            - /bin/bash
            - -exc
            - |
              set +x
              echo "running below scripts"
              if [[ -f "if-condition.sh" ]]; then
                echo "Running if success"
              else
                echo "Running if failed"
              fi
          name: ubuntu
          image: ubuntu
      restartPolicy: Never
  backoffLimit: 1
Just to offer another possible option: Secrets can be used, since they are presented to the pod as volumes.
Secret example:
apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>
Yaml extract:
....
containers:
  - name: container-name
    image: image-name
    command: ["/bin/bash", "/your_script.sh"]
    volumeMounts:
      - name: vsecret-script
        mountPath: /your_script.sh
        subPath: script_text
....
volumes:
  - name: vsecret-script
    secret:
      secretName: secret-script
I know many will argue this is not what Secrets are meant for, but it is an option.
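The script_text value is simply the script base64-encoded, as required for the data field of a Secret. A hedged sketch of producing such a value (the script content here is illustrative, not the placeholder's actual content):

```shell
# Encode an example script body for use as a Secret data value.
# GNU base64 wraps long output at 76 columns; use `base64 -w 0` for one line.
printf '#!/bin/bash\necho "Do this"\n' | base64
```

Decoding the result with `base64 -d` returns the original script, which is what the kubelet mounts into the container at /your_script.sh.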
