How can I act on the last created node in my Jelastic installation manifest?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
  onInstall:
    - addNodes:
        - nodeType: docker
          count: 1
          fixedCloudlets: 1
          cloudlets: 16
          dockerName: gitlab/gitlab-runner
  onAfterAddNode:
    - installDocker
  actions:
    installDocker:
      - cmd:
          - myDockerInstallScript.sh
My problem is that the onAfterAddNode actions are not called, even though the node was added successfully. What am I doing wrong? How can I guarantee the commands will be run on the added node only?
EDIT
My use case is the following: I created an environment a while ago, to which I would now like to add new nodes. Therefore, I need to update that environment by adding new nodes and running some installation steps on those new nodes.

If you need to perform an action only on a newly created node, then you can do it like this:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
  onInstall:
    - addNodes:
        nodeType: docker
        count: 1
        nodeGroup: runner
        fixedCloudlets: 1
        cloudlets: 16
        dockerName: gitlab/gitlab-runner
    - installDocker: ${nodes.runner.last.id}
  actions:
    installDocker:
      - cmd [${this}]:
          - myDockerInstallScript.sh
Also, thanks for the comment about the documentation; we have updated it: https://docs.cloudscripting.com/creating-manifest/actions/#addnodes

Other events in the environment are executed only after the onInstall event has completed successfully, and the onAfterAddNode event will only run the next time a node is added. If you just need to call the action during installation, do it in onInstall:
Example:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
  onInstall:
    - addNodes:
        - nodeType: docker
          count: 1
          fixedCloudlets: 1
          cloudlets: 16
          dockerName: gitlab/gitlab-runner
    - installDocker
  actions:
    installDocker:
      - cmd:
          - myDockerInstallScript.sh
If a certain action also needs to be performed each time a node is added to the environment topology, you can do it this way:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
  onInstall:
    - addNodes:
        - nodeType: docker
          count: 1
          nodeGroup: runner
          fixedCloudlets: 1
          cloudlets: 16
          dockerName: gitlab/gitlab-runner
    - installDocker
  onAfterAddNode [runner]:
    - installDocker
  actions:
    installDocker:
      - cmd [runner]:
          - myDockerInstallScript.sh
If you want a specific action to be performed after scaling the entire layer, then you can do it this way:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
  onInstall:
    - addNodes:
        - nodeType: docker
          count: 1
          nodeGroup: runner
          fixedCloudlets: 1
          cloudlets: 16
          dockerName: gitlab/gitlab-runner
    - installDocker
  onAfterScaleOut [runner]:
    forEach(event.response.nodes):
      installDocker: ${#i.id}
  actions:
    installDocker:
      - cmd [${this}]:
          - myDockerInstallScript.sh

Related

Build pipeline name is not displayed as expected

I have this pipeline file:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
      - main
      - issues*
      - tasks*
  paths:
    exclude:
      - documentation/*
      - Readme.md
variables:
  - name: majorVersion
    value: 1
  - name: minorVersion
    value: 0
  - name: revision
    value: $[counter(variables['minorVersion'],0)]
  - name: buildVersion
    value: $(majorVersion).$(minorVersion).$(revision)
name: $(buildVersion)
I expect the pipeline name to be 1.0.0, but instead it is the literal string $(majorVersion).$(minorVersion).$(revision). Where did I get the formatting wrong?

Is it possible to achieve such a refactor in YAML?

I'm working on a Concourse pipeline and I need to duplicate a lot of code in my YAML, so I'm trying to refactor it to be easily maintainable without ending up with thousands of duplicated lines/blocks. I have arrived at the following YAML after what seems to be the recommended approach, but it doesn't fulfill all my needs.
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
      - name: source-pipeline
      - name: source-code-x
jobs:
  - name: test-a
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-a
            trigger: true
      - <<: *add-rotm-points
  - name: test-b
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-b
            trigger: true
      - <<: *add-rotm-points
My problem is that both of my jobs use the generic task defined at the top, but in the generic task I need to change source-code-x to the -a or -b version used in each job. I cannot find a way to achieve this without duplicating my anchor in every job, which seems counterproductive. But I may not have fully understood YAML anchors/merges.
All you need to do is map inputs on individual tasks, like this:
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
      - name: source-pipeline
      - name: source-code-x
jobs:
  - name: test-a
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-a
            trigger: true
      - <<: *add-rotm-points
        input_mapping:
          source-code-x: source-code-a
  - name: test-b
    plan:
      - in_parallel:
          - get: source-pipeline
          - get: source-code-b
            trigger: true
      - <<: *add-rotm-points
        input_mapping:
          source-code-x: source-code-b
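For clarity, the YAML merge (<<: *add-rotm-points) only copies the anchor's keys as-is; the remapping happens at the Concourse level, not the YAML level. The merged step in test-a effectively resolves to something like this sketch (shown for illustration only):
- task: add-rotm-points
  config:
    # ...same generic config as the anchor, still declaring an input named source-code-x...
    inputs:
      - name: source-pipeline
      - name: source-code-x
  input_mapping:
    source-code-x: source-code-a   # Concourse feeds the source-code-a resource into the generic input
So the anchor stays generic, and each job only adds its own input_mapping.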
See Example Three in this blog: https://blog.concourse-ci.org/introduction-to-task-inputs-and-outputs/

Task with loop in Argo workflow

I want to introduce a for loop in a workflow that consists of 2 individual tasks. The second is dependent on the first, each one uses a different template, and the second should iterate with {{item}}. For each iteration, I want to know whether the default is to execute only the second task or to re-execute the whole flow.
To repeat the second step only, use withItems/withParam (there is no withArtifact, though you can get the same behavior with data). These loops repeat only the specific step they are attached to, once for each of the specified items/parameters.
- name: create-resources
  inputs:
    parameters:
      - name: env
      - name: giturl
      - name: resources
      - name: awssecret
  dag:
    tasks:
      - name: resource
        template: resource-create
        arguments:
          parameters:
            - name: env
              value: "{{inputs.parameters.env}}"
            - name: giturl
              value: "{{inputs.parameters.giturl}}"
            - name: resource
              value: "{{item}}"
            - name: awssecret
              value: "{{inputs.parameters.awssecret}}"
        withParam: "{{inputs.parameters.resources}}"

############# For parallel execution use steps ##############
  steps:
    - - name: resource
        template: resource-create
        arguments:
          parameters:
            - name: env
              value: "{{inputs.parameters.env}}"
            - name: giturl
              value: "{{inputs.parameters.giturl}}"
            - name: resource
              value: "{{item}}"
            - name: awssecret
              value: "{{inputs.parameters.awssecret}}"
        withParam: "{{inputs.parameters.resources}}"
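If you also want to make the dependency explicit, a DAG lets the first task run once while only the looping task repeats. A minimal illustrative sketch (the template and item names here are hypothetical, not from your workflow):
- name: create-resources
  dag:
    tasks:
      - name: prepare
        template: prepare-env        # hypothetical first task, runs exactly once
      - name: resource
        template: resource-create    # repeated once per item
        dependencies: [prepare]      # waits for prepare; prepare is NOT re-executed per iteration
        arguments:
          parameters:
            - name: resource
              value: "{{item}}"
        withItems: ["res-a", "res-b", "res-c"]
Each iteration spawns a separate pod for resource-create only; the first task keeps its single execution and its outputs.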

Why am I getting this error in Concourse? Error: No step configured

I am brand new to Concourse and am trying to use it to build a Terraform CI platform, and I cannot figure out why I'm getting this error on my very first pipeline. Can anyone help out?
jobs:
  - name: terraform-pipeline
    serial: true
    plan:
      - aggregate:
          - get: master-branch
            trigger: true
          - get: common-tasks
            params: { submodules: [ terraform ] }
            trigger: true
      - task: terraform-plan
        file: common-tasks/terraform/0.12.29.yml
        input_mapping: { source: master-branch }
        params:
          command: plan
          cache: true
          access_key: ((aws-access-key))
          secret_key: ((aws-secret-key))
          directory: master-branch/terraform-poc/dev
resources:
  - name: master-branch
    type: git
    source:
      uri: https://github.com/rossrollin/terraform-poc
      branch: master
  - name: common-tasks
    type: git
    source:
      uri: https://github.com/telia-oss/concourse-tasks.git
      branch: master
I'm executing the pipeline like so:
fly -t concourse-poc sp -p terraform-pipeline -c pipeline2.yml -v aws-access-key='' -v aws-secret-key=''
error: error unmarshaling JSON: while decoding JSON: no step configured
The aggregate step was deprecated in version 5.2.0 and removed in version 7.0.0.
You need to replace it with the new in_parallel step.
- - aggregate:
+ - in_parallel:
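Applied to the job from your pipeline, the fixed plan would look like this (only the step name changes; the nested get steps stay the same):
plan:
  - in_parallel:
      - get: master-branch
        trigger: true
      - get: common-tasks
        params: { submodules: [ terraform ] }
        trigger: true
  - task: terraform-plan
    file: common-tasks/terraform/0.12.29.yml
    # ...rest of the task step unchanged...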
Removing '- aggregate:' and just running the resource gets inline fixes my issue.

/busybox/sh: syntax error: bad substitution with Tekton

I'm trying to pull source code from GitHub, then build and push a Docker image to Docker Hub, using a Tekton pipeline and Knative on a Kubernetes cluster.
I'm following this link for the installation and setup of Tekton:
https://www.ibm.com/cloud/blog/build-a-knative-service-with-tekton-and-apache-openwhisk-nodejs-runtime
task-build.yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: task-build
spec:
  inputs:
    resources:
      - name: docker-source
        type: git
    params:
      - name: TARGET_IMAGE_NAME
        description: name of the image to be tagged and pushed
      - name: TARGET_IMAGE_TAG
        description: tag the image before pushing
        default: "latest"
      - name: DOCKERFILE
        description: name of the dockerfile
      - name: OW_RUNTIME_DEBUG
        description: flag to indicate debug mode should be on/off
        default: "false"
      - name: OW_RUNTIME_PLATFORM
        description: flag to indicate the platform, one of ["openwhisk", "knative", ... ]
        default: "knative"
      - name: OW_ACTION_NAME
        description: name of the action
        default: "foo"
      - name: OW_ACTION_CODE
        description: JavaScript source code to be evaluated
        default: ""
      - name: OW_ACTION_MAIN
        description: name of the function in the "__OW_ACTION_CODE" to call as the action handler
        default: "main"
      - name: OW_ACTION_BINARY
        description: flag to indicate zip function, for zip actions, "__OW_ACTION_CODE" must be base64 encoded string
        default: "false"
      - name: OW_HTTP_METHODS
        description: list of HTTP methods, any combination of [GET, POST, PUT, and DELETE], default is [POST]
        default: "[POST]"
      - name: OW_ACTION_RAW
        description: flag to indicate raw HTTP handling, interpret and process an incoming HTTP body directly
        default: "false"
  outputs:
    resources:
      - name: builtImage
        type: image
  steps:
    - name: add-ow-env-to-dockerfile
      image: "gcr.io/kaniko-project/executor:debug"
      command:
        - /busybox/sh
      args:
        - -c
        - |
          cat <<EOF >> ${inputs.params.DOCKERFILE}
          ENV __OW_RUNTIME_DEBUG "${inputs.params.OW_RUNTIME_DEBUG}"
          ENV __OW_RUNTIME_PLATFORM "${inputs.params.OW_RUNTIME_PLATFORM}"
          ENV __OW_ACTION_NAME "${inputs.params.OW_ACTION_NAME}"
          ENV __OW_ACTION_CODE "${inputs.params.OW_ACTION_CODE}"
          ENV __OW_ACTION_MAIN "${inputs.params.OW_ACTION_MAIN}"
          ENV __OW_ACTION_BINARY "${inputs.params.OW_ACTION_BINARY}"
          ENV __OW_HTTP_METHODS "${inputs.params.OW_HTTP_METHODS}"
          ENV __OW_ACTION_RAW "${inputs.params.OW_ACTION_RAW}"
          EOF
    - name: adapt-dockerfile-to-tekton
      image: "gcr.io/kaniko-project/executor:debug"
      command:
        - sed
      args:
        - -i
        - -e
        - 's/COPY ./COPY .\/docker-source/g'
        - ${inputs.params.DOCKERFILE}
    - name: build-openwhisk-nodejs-runtime
      image: "gcr.io/kaniko-project/executor:latest"
      args: ["--destination=${inputs.params.TARGET_IMAGE_NAME}:${inputs.params.TARGET_IMAGE_TAG}", "--dockerfile=${inputs.params.DOCKERFILE}"]
When trying to build and push the image, I am getting this error:
conditions:
  - lastTransitionTime: "2020-09-24T07:33:11Z"
    message: '"step-add-ow-env-to-dockerfile" exited with code 2 (image: "docker-pullable://gcr.io/kaniko-project/executor#sha256:0f27b0674797b56db08010dff799c8926c4e9816454ca56cc7844df228c53485"); for logs run: kubectl -n default logs task-run-helloworld-pod-5bbkx -c step-add-ow-env-to-dockerfile'
    reason: Failed
    status: "False"
    type: Succeeded
When I checked the logs for the error message, I got:
Error : /busybox/sh: syntax error: bad substitution
