/busybox/sh: syntax error: bad substitution with Tekton - shell

I'm trying to pull source code from GitHub, then build and push a Docker image to Docker Hub, using a Tekton pipeline and Knative on a Kubernetes cluster.
I'm following this link for the installation and setup of Tekton:
https://www.ibm.com/cloud/blog/build-a-knative-service-with-tekton-and-apache-openwhisk-nodejs-runtime
task-build.yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: task-build
spec:
  inputs:
    resources:
      - name: docker-source
        type: git
    params:
      - name: TARGET_IMAGE_NAME
        description: name of the image to be tagged and pushed
      - name: TARGET_IMAGE_TAG
        description: tag the image before pushing
        default: "latest"
      - name: DOCKERFILE
        description: name of the dockerfile
      - name: OW_RUNTIME_DEBUG
        description: flag to indicate debug mode should be on/off
        default: "false"
      - name: OW_RUNTIME_PLATFORM
        description: flag to indicate the platform, one of ["openwhisk", "knative", ... ]
        default: "knative"
      - name: OW_ACTION_NAME
        description: name of the action
        default: "foo"
      - name: OW_ACTION_CODE
        description: JavaScript source code to be evaluated
        default: ""
      - name: OW_ACTION_MAIN
        description: name of the function in the "__OW_ACTION_CODE" to call as the action handler
        default: "main"
      - name: OW_ACTION_BINARY
        description: flag to indicate zip function, for zip actions, "__OW_ACTION_CODE" must be base64 encoded string
        default: "false"
      - name: OW_HTTP_METHODS
        description: list of HTTP methods, any combination of [GET, POST, PUT, and DELETE], default is [POST]
        default: "[POST]"
      - name: OW_ACTION_RAW
        description: flag to indicate raw HTTP handling, interpret and process an incoming HTTP body directly
        default: "false"
  outputs:
    resources:
      - name: builtImage
        type: image
  steps:
    - name: add-ow-env-to-dockerfile
      image: "gcr.io/kaniko-project/executor:debug"
      command:
        - /busybox/sh
      args:
        - -c
        - |
          cat <<EOF >> ${inputs.params.DOCKERFILE}
          ENV __OW_RUNTIME_DEBUG "${inputs.params.OW_RUNTIME_DEBUG}"
          ENV __OW_RUNTIME_PLATFORM "${inputs.params.OW_RUNTIME_PLATFORM}"
          ENV __OW_ACTION_NAME "${inputs.params.OW_ACTION_NAME}"
          ENV __OW_ACTION_CODE "${inputs.params.OW_ACTION_CODE}"
          ENV __OW_ACTION_MAIN "${inputs.params.OW_ACTION_MAIN}"
          ENV __OW_ACTION_BINARY "${inputs.params.OW_ACTION_BINARY}"
          ENV __OW_HTTP_METHODS "${inputs.params.OW_HTTP_METHODS}"
          ENV __OW_ACTION_RAW "${inputs.params.OW_ACTION_RAW}"
          EOF
    - name: adapt-dockerfile-to-tekton
      image: "gcr.io/kaniko-project/executor:debug"
      command:
        - sed
      args:
        - -i
        - -e
        - 's/COPY ./COPY .\/docker-source/g'
        - ${inputs.params.DOCKERFILE}
    - name: build-openwhisk-nodejs-runtime
      image: "gcr.io/kaniko-project/executor:latest"
      args: ["--destination=${inputs.params.TARGET_IMAGE_NAME}:${inputs.params.TARGET_IMAGE_TAG}", "--dockerfile=${inputs.params.DOCKERFILE}"]
When trying to build and push the image, I'm getting this error:
conditions:
  - lastTransitionTime: "2020-09-24T07:33:11Z"
    message: '"step-add-ow-env-to-dockerfile" exited with code 2 (image: "docker-pullable://gcr.io/kaniko-project/executor@sha256:0f27b0674797b56db08010dff799c8926c4e9816454ca56cc7844df228c53485"); for logs run: kubectl -n default logs task-run-helloworld-pod-5bbkx -c step-add-ow-env-to-dockerfile'
    reason: Failed
    status: "False"
    type: Succeeded
When I checked the logs for the error message, I see:
Error : /busybox/sh: syntax error: bad substitution

Related

Is it possible to achieve such a refactor in YAML

I'm working on a Concourse pipeline and I need to duplicate a lot of code in my YAML, so I'm trying to refactor it so it is easily maintainable and I don't end up with thousands of duplicated lines/blocks.
I have arrived at the following YAML file after following what seems to be the recommended approach, but it doesn't fulfill all my needs.
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
      - name: source-pipeline
      - name: source-code-x

jobs:
  - name: test-a
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-a
            trigger: true
      - <<: *add-rotm-points
  - name: test-b
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-b
            trigger: true
      - <<: *add-rotm-points
My problem is that both my jobs use the generic task defined at the top, but in the generic task I need to change source-code-x to the -a or -b version used by each job.
I cannot find a way to achieve this without duplicating my anchor in every job, which seems counterproductive. But I may not have fully understood YAML anchors/merges.
All you need to do is map inputs on individual tasks, like this:
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
      - name: source-pipeline
      - name: source-code-x

jobs:
  - name: test-a
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-a
            trigger: true
      - <<: *add-rotm-points
        input_mapping:
          source-code-x: source-code-a
  - name: test-b
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-b
            trigger: true
      - <<: *add-rotm-points
        input_mapping:
          source-code-x: source-code-b
See Example Three in this blog: https://blog.concourse-ci.org/introduction-to-task-inputs-and-outputs/

I need a Google Cloud Build "final step"

The problem: Some steps create entities in K8b that must eventually be removed regardless of the success of any other steps in the build.
Here is an example of cloudbuild.yaml
steps:
  # TEST NAMESPACE
  #
  # some previous steps...
  # package jar
  # build container & etc.

  # Kubernetes RUN DB
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: deploy-db
    waitFor: ['-']
    args: ['apply',
           '--filename', './k8b/db/',
           '--location', 'somewhere',
           '--cluster', 'my-trololo-cluster']

  # Run something else in Kubernetes
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: deploy-other-things
    waitFor:
      - 'deploy-db'
    args: ['apply',
           '--filename', './k8b/other-things/',
           '--location', 'somewhere',
           '--cluster', 'my-trololo-cluster']

  # Test DB in pod
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: test-db
    waitFor:
      - 'deploy-other-things'
    entrypoint: 'bash'
    args: ['./scripts/test_db.sh']

  # Run REST-API
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: deploy-rest
    waitFor:
      - 'deploy-other-things'
    args: ['run',
           '--filename', './k8b/rest/',
           '--location', 'somewhere',
           '--cluster', 'my-trololo-cluster']

  # Test REST-API
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: test-REST-API
    waitFor:
      - 'deploy-rest'
      - 'test-db'
    entrypoint: 'bash'
    args: ['./scripts/test_rest_api.sh']

  # Cleanup steps
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: cleanup
    waitFor:
      - 'test-REST-API'
    entrypoint: 'kubectl'
    args: [ 'delete', '--filename', './k8b', '--recursive' ]

  # Delete PERSISTENT VOLUME
  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: delete-persistent-volume
    waitFor:
      - 'test-REST-API'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        pvc_name=$(kubectl get pvc --selector=$sel -o jsonpath={.items..metadata.name})
        kubectl delete pvc ${pvc_name}
So, if any step before the “cleanup steps” fails, the deletion of entities in GKE will not occur.
And the next cloudbuild runs will not happen on a clean cluster.
I can't find any solution to this case in the docs.
At the moment the only solution I can think of is to wrap every step in a bash script, and if there is a crash:
catch it in the bash script;
run a command inside the script to clean up the cluster;
then exit the script with a non-zero code.
And so on for every step.
But this is not a very good solution in my opinion. Maybe there is some better solution?
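For illustration, a minimal sketch of that per-step wrapper could look like the following (my own example, not from the build above; the kubectl cleanup command is the one from the 'cleanup' step in the question):

  - name: 'gcr.io/cloud-builders/gke-deploy'
    id: test-db
    waitFor:
      - 'deploy-other-things'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # Run the actual test; if it fails, clean up the cluster first,
        # then fail the build step with a non-zero exit code.
        if ! ./scripts/test_db.sh; then
          kubectl delete --filename ./k8b --recursive
          exit 1
        fi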

Changing directories in Cloud Build: 'cd' not found

I'm using Cloud Build to clone a repository. I can confirm the repository clones successfully to the Cloud Build /workspace volume.
steps:
  - id: 'Clone repository'
    name: 'gcr.io/cloud-builders/git'
    args: ['clone', $_REPO_URL]
    volumes:
      - name: 'ssh'
        path: /root/.ssh
I then run the next step to confirm
- id: 'List'
  name: 'alpine'
  args: ['ls']
and it shows me the repository is in the current directory. But when I try to cd into the directory, the cd command doesn't work and throws an error:
ERROR: build step 3 "alpine" failed: starting step container failed: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cd <repo-name>": executable file not found in $PATH: unknown
My ultimate goal is to cd into the repository and run some git commands. I use alpine later on because the git builder image doesn't allow me to use cd either.
substitutions:
  _REPO_NAME: 'test-repo'
  _REPO_URL: 'git@bitbucket.org:example/test-repo.git'
  _BRANCH_NAME: 'feature/something'

steps:
  - id: 'Clone repository'
    name: 'gcr.io/cloud-builders/git'
    args: ['clone', $_REPO_URL]
    volumes:
      - name: 'ssh'
        path: /root/.ssh

  - id: 'Check Diff'
    name: 'alpine'
    args: ['cd $_REPO_NAME', '&&', 'git checkout $_BRANCH_NAME', '&&', 'git diff main --name-only']
You can use bash to run any commands you would like.
Here is one example I use for one of my projects:
- name: 'gcr.io/cloud-builders/git'
  id: Clone env repository
  entrypoint: /bin/sh
  args:
    - '-c'
    - |
      git clone git@github.com:xyz/abc.git && \
      cd gitops-env-repo/ && \
      git checkout dev
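Applied to the question above, the same pattern could look like this (a sketch, reusing the _REPO_NAME and _BRANCH_NAME substitutions from the question; Cloud Build expands them before the shell runs):

- id: 'Check Diff'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: /bin/sh
  args:
    - '-c'
    - |
      # cd works here because the whole line runs inside one shell
      cd $_REPO_NAME && \
      git checkout $_BRANCH_NAME && \
      git diff main --name-only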
Use the dir field in your *.yaml file.
steps:
  - name: string
    args: [string, string, ...]
    env: [string, string, ...]
    dir: string
    id: string
    waitFor: [string, string, ...]
    entrypoint: string
    secretEnv: string
    volumes: object(Volume)
    timeout: string (Duration format)
  - name: string
    ...
  - name: string
    ...
timeout: string (Duration format)
queueTtl: string (Duration format)
logsBucket: string
options:
  env: [string, string, ...]
  secretEnv: string
  volumes: object(Volume)
  sourceProvenanceHash: enum(HashType)
  machineType: enum(MachineType)
  diskSizeGb: string (int64 format)
  substitutionOption: enum(SubstitutionOption)
  dynamicSubstitutions: boolean
  logStreamingOption: enum(LogStreamingOption)
  logging: enum(LoggingMode)
  pool: object(PoolOption)
substitutions: map (key: string, value: string)
tags: [string, string, ...]
serviceAccount: string
secrets: object(Secret)
availableSecrets: object(Secrets)
artifacts: object (Artifacts)
images:
  - [string, string, ...]
https://cloud.google.com/build/docs/build-config-file-schema
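For example, applied to the question above (a sketch; it assumes the _REPO_NAME substitution is expanded in dir just as in other step fields), dir runs the step inside the cloned repository instead of /workspace:

- id: 'Check Diff'
  name: 'gcr.io/cloud-builders/git'
  dir: '$_REPO_NAME'   # working directory, relative to /workspace
  args: ['diff', 'main', '--name-only']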

Why am I getting this error in Concourse? Error: No step configured

I am brand new to Concourse and am trying to use it to build a terraform-ci platform, but I cannot figure out why I'm getting this error on my very first pipeline. Can anyone help out?
jobs:
  - name: terraform-pipeline
    serial: true
    plan:
      - aggregate:
          - get: master-branch
            trigger: true
          - get: common-tasks
            params: { submodules: [ terraform ] }
            trigger: true
      - task: terraform-plan
        file: common-tasks/terraform/0.12.29.yml
        input_mapping: { source: master-branch }
        params:
          command: plan
          cache: true
          access_key: ((aws-access-key))
          secret_key: ((aws-secret-key))
          directory: master-branch/terraform-poc/dev

resources:
  - name: master-branch
    type: git
    source:
      uri: https://github.com/rossrollin/terraform-poc
      branch: master
  - name: common-tasks
    type: git
    source:
      uri: https://github.com/telia-oss/concourse-tasks.git
      branch: master
Executing the pipeline like so:
fly -t concourse-poc sp -p terraform-pipeline -c pipeline2.yml -v aws-access-key='' -v aws-secret-key=''
error: error unmarshaling JSON: while decoding JSON: no step configured
The aggregate step was deprecated in version 5.2.0 and removed in version 7.0.0.
You need to replace it with the new in_parallel step.
- - aggregate:
+ - in_parallel:
Removing '- aggregate:' and just running the resource gets inline fixes my issue.
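For reference, the plan from the question rewritten with in_parallel (a sketch of the replacement described above) would look like this:

plan:
  - in_parallel:
      - get: master-branch
        trigger: true
      - get: common-tasks
        params: { submodules: [ terraform ] }
        trigger: true
  - task: terraform-plan
    file: common-tasks/terraform/0.12.29.yml
    input_mapping: { source: master-branch }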

OpenShift : Waiting for image stream to be created

I am creating an installation script that will create resources off of YAML files†. This script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created as follows:
imagestream for nginx-114-rhel7 - is-nginx.yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    build: build-repo
  name: nginx-114-rhel7
  namespace: ns
spec:
  tags:
    - annotations: null
      from:
        kind: DockerImage
        name: registry.access.redhat.com/rhscl/nginx-114-rhel7
      name: latest
      referencePolicy:
        type: Source
imagestream for repo - is-repo.yaml
apiVersion: v1
kind: ImageStream
metadata:
  labels:
    application: is-rp
  name: is-rp
  namespace: ns
buildconfig for repo (output will be imagestream for repo) - bc-repo.yaml
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    build: rp
  name: bc-rp
  namespace: ns
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'is-rp:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: dev_1.0
      uri: 'http://github.com/username/repo.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: 'nginx-114-rhel7:latest'
        namespace: flo
    type: Source
  successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;oc start-build bc/bc-rep --wait
I get this error message:
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But, when I run the commands with a sleep before start-build, the build is triggered correctly.
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;sleep 5;oc start-build bc/bc-rep
How do I trigger start-build without adding a sleep command? oc wait seems to work only with --for=condition and --for=delete, and I do not know what value should be used for --for=condition.
† - I do not see a clear guideline on creating installation scripts - with YAML or equivalent oc commands only - for deploying applications onto OpenShift.
Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your build config, you can point to an ImageStreamTag to start a build:
type: "imageChange"
imageChange: {}
type: "imageChange"
imageChange:
from:
kind: "ImageStreamTag"
name: "custom-image:latest"
oc wait --for=condition=available only works when the status object includes conditions, which is not the case for imagestreams; their status looks like this:
status:
  dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
  tags:
    - items:
        - created: "2019-11-05T11:23:45Z"
          dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
          generation: 2
          image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
      tag: builder
    - items:
        - created: "2019-11-05T11:23:45Z"
          dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
          generation: 2
          image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
      tag: runtime
Until the OpenShift CLI implements a built-in waiting command for imagestreams, what I used to do is: request the imagestream object, parse the status object for the expected tag, and sleep a few seconds if it is not ready yet. Something like this:
until (oc get is nginx-114-rhel7 -o json || echo '{}') | jq --exit-status '[.status.tags[] | select(.tag == "latest")] | length == 1'; do
  sleep 1
done
