Using a different service account in Cloud Build

I can't use the preview feature to specify a different service account, but I do have the physical key (.json). I've uploaded this key to Secret Manager and intend to pull it in during the build.
Is what I have done correct?
steps:
- id: 'Setup Credentials'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  secretEnv: ['SERVICE_ACCOUNT']
  args:
  - '-c'
  - |
    echo "$$SERVICE_ACCOUNT" >> /credentials/service_account.json
  volumes:
  - name: 'credentials'
    path: /credentials

availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/service-account-key/versions/latest
    env: 'SERVICE_ACCOUNT'
Then, in a step that needs to use it, I am overwriting GOOGLE_APPLICATION_CREDENTIALS:
- id: 'Do stuff as other service account'
  name: 'hashicorp/terraform'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    # exported so that child processes such as terraform inherit it
    export GOOGLE_APPLICATION_CREDENTIALS=/credentials/service_account.json
    # do things here
    # terraform plan
  volumes:
  - name: 'credentials'
    path: /credentials
Ideally we would use the Cloud Build service account itself, but it already has too much going on as it is.
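Note that gcloud itself does not read GOOGLE_APPLICATION_CREDENTIALS (that variable is honored by client libraries and tools like Terraform), so a gcloud-based step would need to activate the key explicitly. A minimal sketch of such a step, assuming the same volume and file name as above:

- id: 'Activate other service account for gcloud steps'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    # make the downloaded key the active gcloud identity
    gcloud auth activate-service-account --key-file=/credentials/service_account.json
  volumes:
  - name: 'credentials'
    path: /credentials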

Related

GitHub Action workflow not being interpreted upon merge

I'm attempting to create a GitHub Actions workflow and I'm getting an error that I'm unsure how to fix, as I've implemented this before in similar environments.
name: Deploy Staging

# Controls when the workflow will run
on:
  # Triggers the workflow on push events only for the main branch
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  # Run the build job first
  build:
    name: Build
    uses: ./.github/workflows/build.yml

  deploy-staging:
    name: Staging Deploy
    runs-on: ubuntu-latest
    environment:
      name: staging
    needs: [build]
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Download build artifacts
        uses: actions/download-artifact@v3
        with:
          name: buildResult
      - name: CDK install
        run: npm install -g aws-cdk
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
      - name: CDK diff
        run: cdk --app . diff staging
      - name: CDK deploy
        run: cdk --app . deploy staging --require-approval never
      - name: Configure DX AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Report deployment
        uses: XXXX/deployment-tracker-action@v1
        if: always()
        with:
          application-name: XXXX
          environment: staging
          platform: test
          deployment-status: ${{ steps.deploy-workload.outcome == 'success' && 'success' || 'fail' }}
          aws-region: us-east-1
I don't quite understand where I'm going wrong here, but when I merged my actions branch and attempted to run it, I received the following message:
error parsing called workflow "./.github/workflows/build.yml": workflow is not reusable as it is missing a `on.workflow_call` trigger
Below is my build file for reference.
name: Build

# Controls when the workflow will run
on:
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        default: production

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # next build runs lint, don't need a step for it
  build:
    name: Build
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Install Dependencies
        run: npm install
      - name: CDK install
        run: npm install -g aws-cdk
      - name: CDK build
        run: cdk synth
      - name: Upload build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: buildResult
          path: |
            cdk.out

  test:
    name: Test
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Install Dependencies
        run: npm install
      - name: Run tests
        run: npm test
If you want to call another workflow (a reusable workflow), the workflow you're calling needs to have the workflow_call trigger.
Therefore, to resolve your error, change build.yml to:
name: Build
on:
  workflow_call:
  pull_request:
  # etc..
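If you also want the buildEnvironment input to be usable when build.yml is called from Deploy Staging, note that workflow_call declares its inputs separately, and each workflow_call input needs an explicit type. A sketch of the full triggers block, assuming the existing defaults are kept:

name: Build
on:
  workflow_call:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        default: production
        type: string
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        default: production

The calling job can then pass a value via a with: block on its uses: entry.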

Azure pipeline base template inserting name key in environment

I have a base template that accepts a stageList parameter. I don't do anything with the jobs in those stages:
parameters:
- name: stages
  type: stageList
  default: []

stages:
- ${{ parameters.stages }}
I'm passing into that a stage that contains a deployment job. I have hardcoded the environment for testing purposes, but even so it inserts a name: key under environment:
resources:
  repositories:
  - repository: templates
    type: git
    name: basePipelineTemplatesHost/basePipelineTemplatesHost

extends:
  template: templateExtendedByDeployment/template.yml@templates
  parameters:
    stages:
    - stage: buildStage1
      jobs:
      - deployment:
        displayName: Deploy to demo environment
        environment: DTL-Demo-Env
        strategy:
          runOnce:
            deploy:
              steps:
              - script: echo test
Resulting in the following rendered YAML:
environment: {
  name: DTL-Demo-Env
}
This causes the job to run on a hosted VM instead of my on-prem environment agent. Is this a bug?
Just a suggestion: you should add resourceType under environment.
jobs:
- deployment:
  displayName: Deploy to demo environment
  environment:
    name: DTL-Demo-Env
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo test
If you don't, the newly created environment will always run against the hosted agent pool even when you mean to use your private agent. Adding resourceType makes the deployment target the resources registered in your environment instead.
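If you want to pin the job to one specific registered machine rather than any VirtualMachine resource in the environment, there is also a resourceName field. A sketch, where my-onprem-vm is a hypothetical resource name:

environment:
  name: DTL-Demo-Env
  resourceType: VirtualMachine
  resourceName: my-onprem-vm  # hypothetical; must match a VM registered in DTL-Demo-Env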

How to access image content on Kubernetes container init

I have an image which contains data inside /usr/data/webroot. This data should be moved on container init to /var/www/html.
Now I stumbled upon initContainers. As I understand it, they can be used to execute tasks on container initialization.
But I don't know if the task is executed after the amo-magento pods are created, or if the init task runs first and the magento pods are created after that.
I suppose that the container with the magento image is not available when the initContainers task runs, so no content is available to move to the new directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amo-magento
  labels:
    app: amo-magento
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amo-magento
  template:
    metadata:
      labels:
        app: amo-magento
        tier: frontend
    spec:
      initContainers:
      - name: setup-magento
        image: busybox:1.28
        command: ["sh", "-c", "mv /magento/* /www"]
        volumeMounts:
        - mountPath: /www
          name: pvc-www
        - mountPath: /magento
          name: magento-src
      containers:
      - name: amo-magento
        image: amo-magento:0.7 # add google gcr.io path after upload
        imagePullPolicy: Never
        volumeMounts:
        - name: install-sh
          mountPath: /tmp/install.sh
          subPath: install.sh
        - name: mage-autoinstall
          mountPath: /tmp/mage-autoinstall.sh
          subPath: mage-autoinstall.sh
        - name: pvc-www
          mountPath: /var/www/html
        - name: magento-src
          mountPath: /usr/data/webroot
        # maybe as secret - can be used as configMap because it does not have to be writable
        - name: auth-json
          mountPath: /var/www/html/auth.json
          subPath: auth.json
        - name: php-ini-prod
          mountPath: /usr/local/etc/php/php.ini
          subPath: php.ini
        # - name: php-memory-limit
        #   mountPath: /usr/local/etc/php/conf.d/memory-limit.ini
        #   subPath: memory-limit.ini
      volumes:
      - name: magento-src
        emptyDir: {}
      - name: pvc-www
        persistentVolumeClaim:
          claimName: magento2-volumeclaim
      - name: install-sh
        configMap:
          name: install-sh
      # kubectl create configmap mage-autoinstall --from-file=build/docker/mage-autoinstall.sh
      - name: mage-autoinstall
        configMap:
          name: mage-autoinstall
      - name: auth-json
        configMap:
          name: auth-json
      - name: php-ini-prod
        configMap:
          name: php-ini-prod
      # - name: php-memory-limit
      #   configMap:
      #     name: php-memory-limit
But I don't know if the task is executed after the amo-magento pods are created, or if the init task runs first and the magento pods are created after that.
For sure the latter; that's why you are able to specify an entirely different image: for your initContainers: task -- they are related to one another only in that they run on the same Node and, as you have seen, share volumes. Well, I said "for sure", but you have a slight misconception: it's not that the magento containers are created "after that" as separate pods -- the Pod is the collection of every colocated container, both the initContainers: and the containers: entries.
If I understand your question, the fix to your Deployment is just to update the image: in your initContainers: to be the one which contains the magic /usr/data/webroot content, and then update your shell command to reference the correct path inside that image:
initContainers:
- name: setup-magento
  image: your-magic-image:its-magic-tag
  command: ["sh", "-c", "mv /usr/data/webroot/* /www"]
  volumeMounts:
  - mountPath: /www
    name: pvc-www
  # but **removing** the reference to the magento-src emptyDir volume
and then by the time containers[0] starts up, the PVC will contain the data you expect.
That said, I am actually pretty sure that you want to remove the PVC from this story, since -- by definition -- it is persistent across Pod restarts and thus will only accumulate files over time (your sh command does not currently clean up /www before moving files there). If you replaced all those PVC references with emptyDir: {} references, then those directories would always be "fresh" and would always contain just the content from the tagged image declared in your initContainers:, as sketched below.
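A sketch of that emptyDir variant, keeping the volume name from the question so the volumeMounts stay unchanged:

volumes:
- name: pvc-www
  emptyDir: {}  # was persistentVolumeClaim; recreated empty on every Pod start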

How to disable Molecule idempotence check on Ansible role test?

Using Molecule v2 to test Ansible roles, I ran into an issue with the check that a role is idempotent.
How can I disable this check?
As documented, Molecule configuration parameters are required to be set in the molecule.yml file, but I could not find how to disable the idempotence check.
---
# molecule.yml file
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: ansible-lint
  options:
    x: ANSIBLE0006,ANSIBLE0010,ANSIBLE0012,ANSIBLE0013
platforms:
  - name: mongo01
    image: mongo:3.2
    privileged: yes
    groups:
      - mongodb
      - mongodb_master
  - name: mysql_server
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: some_password
    groups:
      - mysql
  - name: elasticsearch
    image: molecule_local/centos:6
    command: sleep infinity
    dockerfile: Dockerfile
    privileged: yes
    groups:
      - elastic
  - name: esb
    image: molecule_local/centos:6
    command: sleep infinity
    dockerfile: Dockerfile
    links:
      - "elasticsearch-default:elasticsearch elasticsearch01"
      - "mongo01-default:mongo mongo_b2b mongo01"
      - "mysql_server-default:mysql mysql_server"
    groups:
      - fabric
provisioner:
  name: ansible
  config_options:
    defaults:
      vault_password_file: /path/to/vault/file
      diff: yes
scenario:
  name: default
  # Probably something like below should disable the idempotence check.
  idempotent: false
  # Uncomment when developing locally to
  # keep instances running when tests are completed.
  # Must be kept commented when building on CI/CD.
  # test_sequence:
  #   - destroy
  #   - create
  #   - converge
  #   - lint
  #   - verify
verifier:
  name: testinfra
I want to get rid of the idempotence check altogether and rely on my own tests.
You should uncomment test_sequence and include only the steps you want, for example:
test_sequence:
  - destroy
  - create
  - converge
  # - idempotence
  - lint
  - verify
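For context, this list lives under the scenario: key of molecule.yml, so in place it would look like this (a sketch, assuming the default scenario from the question):

scenario:
  name: default
  test_sequence:
    - destroy
    - create
    - converge
    # - idempotence
    - lint
    - verify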

Pull from multiple SCM then mv file in Concourse CI to workdir

I've been banging my head on this one for quite some time and I cannot figure it out (I know it must be a simple thing to do, though).
Currently, I'm trying to pull from two repositories (which naturally creates two separate directories) and then move files from one directory to the other so the Dockerfile can build successfully.
Here's what my pipeline.yml file looks like:
---
jobs:
- name: build-nexus-docker-image
  public: false
  plan:
  - get: git-nexus-docker-images
    trigger: true
  - get: git-nexus-license
    trigger: true
  - task: mv-nexus-license
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu, tag: "trusty"}
      inputs:
      - name: git-nexus-license
      - name: git-nexus-docker-images
      run:
        path: /bin/sh
        args:
        - -c
        - mv -v git-nexus-license/nexus.lic git-nexus-docker-images/nexus.lic; ls -la git-nexus-docker-images
  - put: nexus-docker-image
    params:
      build: git-nexus-docker-images/

resources:
- name: git-nexus-docker-images
  type: git
  source:
    uri: git@git.company.com:dev/nexus-pro-dockerfile.git
    branch: test
    paths: [Dockerfile]
    private_key: {{git_ci_key}}
- name: git-nexus-license
  type: git
  source:
    uri: git@git.company.com:secrets/nexus-information.git
    branch: master
    paths: [nexus.lic]
    private_key: {{git_ci_key}}
- name: nexus-docker-image
  type: docker-image
  source:
    username: {{aws-token-username}}
    password: {{aws-token-password}}
    repository: {{ecr-nexus-repo}}
I've posted the pipeline that can actually be deployed to Concourse; however, I've tried a lot of things and I can't figure out how to do this. I'm stuck on moving the license file from the git-nexus-license directory to the git-nexus-docker-images directory. What I've done doesn't seem to mv the nexus.lic file, because the docker image build fails when it cannot find that file in the directory.
EDIT: I've successfully been able to mv nexus.lic using the code above; however, the build is still failing because the file is not found! I'm not sure what I'm doing wrong; the build works properly if I do it manually, but with Concourse it's failing.
Okay, so I figured out what I was doing wrong, and as usual it was something small. I forgot to add an output to the task, which tells Concourse which directory to hand to later steps as the new workdir. Here's what it looks like now (which works for me):
---
jobs:
- name: build-nexus-docker-image
  public: false
  plan:
  - get: git-nexus-docker-images
    trigger: true
  - get: git-nexus-license
    trigger: true
  - task: mv-nexus-license
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu, tag: "trusty"}
      inputs:
      - name: git-nexus-license
      - name: git-nexus-docker-images
      outputs:
      - name: build-nexus-dir
      run:
        path: /bin/sh
        args:
        - -c
        - mv -v git-nexus-license/nexus.lic build-nexus-dir/nexus.lic; mv -v git-nexus-docker-images/* build-nexus-dir; ls -la build-nexus-dir;
  - put: nexus-docker-image
    params:
      build: build-nexus-dir/

resources:
- name: git-nexus-docker-images
  type: git
  source:
    uri: git@git.company.com:dev/nexus-pro-dockerfile.git
    branch: test
    paths: [Dockerfile]
    private_key: {{git_ci_key}}
- name: git-nexus-license
  type: git
  source:
    uri: git@git.company.com:secrets/nexus-information.git
    branch: master
    paths: [nexus.lic]
    private_key: {{git_ci_key}}
- name: nexus-docker-image
  type: docker-image
  source:
    username: {{aws-token-username}}
    password: {{aws-token-password}}
    repository: {{ecr-nexus-repo}}
I hope this helps whoever gets stuck on this. :)
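For anyone who hits the same wall: as I understand Concourse's task semantics, inputs: are step-local copies, so mutating an input directory (as the first attempt did) is not visible to later steps; only directories declared under outputs: are propagated, which is why assembling the Docker build context in build-nexus-dir works.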
