Tasks are not populated in concourse pipeline (Concourse version 7.8.2) - continuous-integration

I am brand new to Concourse and am trying to set up a pipeline. Why am I running into the issue below on my very first pipeline? Can anyone help out?
jobs:
- name: update-library-cache
  serial: true
  plan:
  - get: library_cache
  - get: infra
    trigger: true
  - in_parallel:
    - {task: update-library-cache, file:infra/ci/tasks/update-preprod-library-cache.yml}
    - {put: library_cache, params: {file: updated_cache/infra-cache-*.tgz}}
- name: build
  serial: true
  plan:
  - {get: api, trigger: true}
  - get: infra
  - get: library_cache
  - in_parallel:
    -steps:
      - {task: js, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: js}, attempts:2}
      - {task: api, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: api}, attempts:2}
      - {task: worker, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: worker}, attempts:2}
When executing the pipeline:
the js, api and worker tasks are not populated in the pipeline.
Is the syntax correct for:
- in_parallel:
-steps:
- {task: js, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: js}, attempts:2}
- {task: api, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: api}, attempts:2}
- {task: worker, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: worker}, attempts:2}
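For reference, Concourse's in_parallel step takes either a list of steps directly or a steps: key (optionally alongside limit and fail_fast), with no leading dash before steps:. A sketch of the build job's parallel section in that form, reusing the same task files and params as above, would look roughly like:
- in_parallel:
    steps:
    - {task: js, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: js}, attempts: 2}
    - {task: api, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: api}, attempts: 2}
    - {task: worker, file: infra/ci/tasks/build-service.yml, privileged: true, params: {TARGET_CFG: ((alpha)), SERVICE: worker}, attempts: 2}
Note the space after each colon inside the flow mappings: in YAML, attempts:2 (without a space) is read as a single scalar key rather than an attempts: 2 pair.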

Related

Serverless Framework - Unable to Deploy Step Function

I have the following serverless yaml that I'm using to try to deploy my first step function:
org: bizrob
app: flexipod-2-queue
service: flexipod-2-queue
frameworkVersion: "2 || 3"
package:
exclude:
# list of the biggest modules that are in devDependencies and similar
- node_modules/aws-sdk/**
- node_modules/serverless-domain-manager/**
- node_modules/#serverless
- node_modules/serverless
- node_modules/java-invoke-local
- node_modules/tabtab
- node_modules/snappy
custom:
region: eu-west-1
provider:
name: aws
runtime: nodejs14.x
plugins:
- serverless-step-functions
functions:
GetConfigDbConnection:
handler: flexipod-2-queue/dbConfig.getConfigDbConnection
environment:
REGION: ${self:custom.region}
GetConfigRec:
handler: flexipod-2-queue/dbConfig.getConfigRec
environment:
REGION: ${self:custom.region}
GetSelectQueries:
handler: flexipod-2-queue/dbConfig.getSelectQueries
environment:
REGION: ${self:custom.region}
PullSqlSvr:
handler: flexipod-2-queue/pullSqlSvrData.pullSqlSvr
environment:
REGION: ${self:custom.region}
API_VERSION_S3: "2006-03-01"
API_VERSION_SQS: "2012-11-05"
SQS_QUEUE_URL: !Ref "MyQueue"
SendToDataLake:
handler: queue-2-datalake/sendToDataLake.sendBatchToQueue
environment:
REGION: ${self:custom.region}
API_VERSION_S3: "2006-03-01"
API_VERSION_SQS: "2012-11-05"
stepFunctions:
stateMachines:
flexipodFlow:
name: flexipodFlow
definition:
StartAt: GetConfigDbConnection
States:
GetConfigDbConnection:
Type: Task
Resource:
Fn::GetAtt: [GetConfigDbConnection, Arn]
Next: GetConfigRec
GetConfigRec:
Type: Task
Resource:
Fn::GetAtt: [GetConfigRec, Arn]
Next: GetSelectQueries
GetSelectQueries:
Type: Task
Resource:
Fn::GetAtt: [GetSelectQueries, Arn]
ResultPath: $.queries
Next: Map
Map:
Type: Map
ItemsPath: $.queries
MaxConcurrency: 2
Next: Final State
Iterator:
StartAt: PullSql
States:
PullSql:
Type: Task
Resource:
Fn::GetAtt: [PullSqlSvr, Arn]
Final State:
Type: Pass
End: true
resources:
Resources:
MyQueue:
Type: "AWS::SQS::Queue"
Properties:
QueueName: "flexipod"
After running serverless deploy, I get the following error in the VS Code terminal:
Error:
TypeError: Cannot read property 'match' of null
at C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\serverless-step-functions\lib\deploy\stepFunctions\compileIamRole.js:472:61
at arrayMap (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\lodash\lodash.js:653:23)
at map (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\lodash\lodash.js:9622:14)
at Function.flatMap (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\lodash\lodash.js:9325:26)
at ServerlessStepFunctions.getIamPermissions (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\serverless-step-functions\lib\deploy\stepFunctions\compileIamRole.js:413:12)
at C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\serverless-step-functions\lib\deploy\stepFunctions\compileIamRole.js:522:56
at Array.forEach (<anonymous>)
at ServerlessStepFunctions.compileIamRole (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\serverless-step-functions\lib\deploy\stepFunctions\compileIamRole.js:511:32)
at ServerlessStepFunctions.tryCatcher (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\promise.js:547:31)
at Promise._settlePromise (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\promise.js:604:18)
at Promise._settlePromiseCtx (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\promise.js:641:10)
at _drainQueueStep (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\async.js:97:12)
at _drainQueue (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\async.js:86:9)
at Async._drainQueues (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] (C:\GitBizTalkers\OLD_Wck_Flexipod\node_modules\bluebird\js\release\async.js:15:14)
at processImmediate (node:internal/timers:464:21)
Anyone see what I've done wrong?
The problem was due to YAML formatting. Line 192,
Fn::GetAtt: [PullSqlSvr, Arn]
needed an extra level of indentation so that it sits below "Resource:" (using spaces, since YAML does not allow tabs for indentation).
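Relative to the surrounding keys, the corrected fragment of the Map iterator looks like this:
PullSql:
  Type: Task
  Resource:
    Fn::GetAtt: [PullSqlSvr, Arn]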

ECK Filebeat Daemonset Forwarding To Remote Cluster

I wish to forward logs from remote EKS clusters to a centralised EKS cluster hosting ECK.
Versions in use:
EKS v1.20.7
Elasticsearch v7.7.0
Kibana v7.7.0
Filebeat v7.10.0
The setup uses an AWS NLB to forward requests to an Nginx ingress, using host-based routing.
When the DNS lookup for Elasticsearch is tested from Filebeat (filebeat test output), it validates the request.
But the logs for Filebeat are telling a different story.
2021-10-05T10:39:00.202Z ERROR [publisher_pipeline_output]
pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)):
Get "https://elasticsearch.dev.example.com:9200": Bad Request
The Filebeat agents can connect to the remote Elasticsearch via the NLB when using a curl request.
The config is below. NB: dev.example.com is the remote cluster hosting ECK.
app:
name: "filebeat"
configmap:
enabled: true
filebeatConfig:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
templates:
- config:
- type: container
paths:
- /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
exclude_lines: ["^\\s+[\\-`('.|_]"]
processors:
- drop_event.when.not.or:
- contains.kubernetes.namespace: "apps-"
- equals.kubernetes.namespace: "cicd"
- decode_json_fields:
fields: ["message"]
target: ""
process_array: true
overwrite_keys: true
- add_fields:
fields:
kubernetes.cluster.name: dev-eks-cluster
target: ""
processors:
- add_cloud_metadata: ~
- add_host_metadata: ~
cloud:
id: '${ELASTIC_CLOUD_ID}'
cloud:
auth: '${ELASTIC_CLOUD_AUTH}'
output:
elasticsearch:
enabled: true
hosts: "elasticsearch.dev.example.com"
username: '${ELASTICSEARCH_USERNAME}'
password: '${ELASTICSEARCH_PASSWORD}'
protocol: https
ssl:
verification_mode: "none"
headers:
Host: "elasticsearch.dev.example.com"
proxy_url: "https://example.elb.eu-west-2.amazonaws.com"
proxy_disable: false
daemonset:
enabled: true
version: 7.10.0
image:
repository: "docker.elastic.co/beats/filebeat"
tag: "7.10.0"
pullPolicy: Always
extraenvs:
- name: ELASTICSEARCH_HOST
value: "https://elasticsearch.dev.example.com"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: "elastic"
- name: ELASTICSEARCH_PASSWORD
value: "remote-cluster-elasticsearch-es-elastic-user-password"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
clusterrolebinding:
enabled: true
namespace: monitoring
clusterrole:
enabled: true
serviceaccount:
enabled: true
namespace: monitoring
deployment:
enabled: false
configmap:
enabled: false
Any tips or suggestions on how to enable Filebeat forwarding, would be much appreciated :-)
#1 Missing ports:
Even with the ports added in as suggested, Filebeat is erroring with:
2021-10-06T08:34:41.355Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)): Get "https://elasticsearch.dev.example.com:9200": Bad Request
...using an AWS NLB to forward requests to Nginx ingress, using host-based routing
How about unsetting proxy_url and proxy_disable, then setting hosts: ["<nlb url>:<nlb listener port>"]?
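In terms of the values above, that suggestion boils down to this change in the output section (a sketch; the NLB listener port is assumed to be 9200):
output:
  elasticsearch:
    enabled: true
    hosts: ["elasticsearch.dev.example.com:9200"]
    protocol: https
    ssl:
      verification_mode: "none"
    # proxy_url, proxy_disable and the Host header override are dropped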
The final working config:
app:
name: "filebeat"
configmap:
enabled: true
filebeatConfig:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
templates:
- config:
- type: container
paths:
- /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
exclude_lines: ["^\\s+[\\-`('.|_]"]
processors:
- drop_event.when.not.or:
- contains.kubernetes.namespace: "apps-"
- equals.kubernetes.namespace: "cicd"
- decode_json_fields:
fields: ["message"]
target: ""
process_array: true
overwrite_keys: true
- add_fields:
fields:
kubernetes.cluster.name: qa-eks-cluster
target: ""
processors:
- add_cloud_metadata: ~
- add_host_metadata: ~
cloud:
id: '${ELASTIC_CLOUD_ID}'
cloud:
auth: '${ELASTIC_CLOUD_AUTH}'
output:
elasticsearch:
enabled: true
hosts: ["elasticsearch.dev.example.com:9200"]
username: '${ELASTICSEARCH_USERNAME}'
password: '${ELASTICSEARCH_PASSWORD}'
protocol: https
ssl:
verification_mode: "none"
daemonset:
enabled: true
version: 7.10.0
image:
repository: "docker.elastic.co/beats/filebeat"
tag: "7.10.0"
pullPolicy: Always
extraenvs:
- name: ELASTICSEARCH_HOST
value: "https://elasticsearch.dev.example.com"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: "elastic"
- name: ELASTICSEARCH_PASSWORD
value: "remote-cluster-elasticsearch-es-elastic-user-password"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
clusterrolebinding:
enabled: true
namespace: monitoring
clusterrole:
enabled: true
serviceaccount:
enabled: true
namespace: monitoring
deployment:
enabled: false
configmap:
enabled: false
In addition, the following changes were needed:
NLB:
Add a listener for 9200, forwarding to the ingress controller for HTTPS.
SG:
Open up port 9200 on the EKS worker nodes.

Ansible handler for restarting Docker Swarm service

I need to restart containers of a Docker Swarm service with Ansible.
The basic definition looks like this:
# tasks/main.yml
- name: 'Create the service container'
docker_swarm_service:
name: 'service'
image: 'service'
networks:
- name: 'internet'
- name: 'reverse-proxy'
publish:
- { target_port: '80', published_port: '80', mode: 'ingress' }
- { target_port: '443', published_port: '443', mode: 'ingress' }
- { target_port: '8080', published_port: '8080', mode: 'ingress' }
mounts:
- { source: '{{ shared_dir }}', target: '/shared' }
replicas: 1
placement:
constraints:
- node.role == manager
restart_config:
condition: 'on-failure'
user: null
force_update: yes
So I thought that
# handlers/main.yml
- name: 'Restart Service'
docker_swarm_service:
name: some-service
image: 'some-image'
force_update: yes
should work as a handler, but it seems that it is not taking all of the options into account.
So, any advice on how to properly restart the containers of a Docker Swarm service?
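One pattern that avoids repeating the full service definition in the handler (a sketch, not verified against this playbook, assuming the play targets a Swarm manager node and the service is named 'service' as in the task above) is to force a rolling restart of the existing service with the Docker CLI:
# handlers/main.yml
- name: 'Restart Service'
  # docker service update --force restarts the service's tasks without
  # changing its definition; must run on a Swarm manager node
  command: docker service update --force service
Because the restart is done against the service as it already exists, the handler does not need to mirror every option from tasks/main.yml.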

Jenkins JNLP slave is stuck on progress bar when it needs to run a Maven job

I have a problem with my Jenkins which runs on K8s.
My pipeline is built with 2 pods - Jnlp Alpine (default for K8s) and Maven (3.6.0, based on the java-8:jdk-8u191-slim image).
From time to time, after starting a new build, it gets stuck and the build makes no progress.
Entering the Pod:
Jnlp - seems to be functioning as expected
Maven - no job is running (running ps -ef).
Appreciate your help.
Tried to pause / resume - that did not solve it.
The only way is to abort and re-initiate the build.
Jenkins version - 2.164.1
My pipeline is :
properties([[$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false],
parameters([string(defaultValue: 'master', description: '', name: 'branch', trim: false),
string(description: 'enter your namespace name', name: 'namespace', trim: false),])])
def label = "jenkins-slave-${UUID.randomUUID().toString()}"
podTemplate(label: label, namespace: "${params.namespace}", yaml: """
apiVersion: v1
kind: Pod
spec:
nodeSelector:
group: maven-ig
containers:
- name: maven
image: accontid.dkr.ecr.us-west-2.amazonaws.com/base_images:maven-base
command: ['cat']
resources:
limits:
memory: "16Gi"
cpu: "16"
requests:
memory: "10Gi"
cpu: "10"
tty: true
env:
- name: ENVIRONMENT
value: dev
- name: config.env
value: k8s
    volumeMounts:
    - mountPath: "/local"
      name: nvme
volumes:
- name: docker
hostPath:
path: /var/run/docker.sock
- name: nvme
hostPath:
path: /local
"""
) {
node(label) {
wrap([$class: 'TimestamperBuildWrapper']) {
checkout([$class: 'GitSCM', branches: [[name: "*/${params.branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'credetials', url: 'https://github.com/example/example.git']]])
wrap([$class: 'BuildUser']) {
user = env.BUILD_USER_ID
}
currentBuild.description = 'Branch: ' + "${branch}" + ' | Namespace: ' + "${user}"
stage('Stable tests') {
container('maven') {
try {
sh "find . -type f -name '*k8s.*ml' | xargs sed -i -e 's|//mysql|//mysql.${user}.svc.cluster.local|g'"
sh "mvn -f pom.xml -Dconfig.env=k8s -Dwith_stripe_stub=true -Dpolicylifecycle.integ.test.url=http://testlifecycleservice-${user}.testme.io -Dmaven.test.failure.ignore=false -Dskip.surefire.tests=true -Dmaven.test.skip=false -Dskip.package.for.deployment=true -T 1C clean verify -fn -P stg-stable-it"
}
finally {
archive "**/target/failsafe-reports/*.xml"
junit '**/target/failsafe-reports/*.xml'
}
}
}
// stage ('Delete namespace') {
// build job: 'delete-namspace', parameters: [
// [$class: 'StringParameterValue', name: 'namespace', value: "${user}"]], wait: false
// }
}
}
}

Codeception: DB is not rolling back in API testing

I am testing certain parts of my API and have noticed that a specific table in the DB is not rolling back when my tests fail.
Here is my api.suite.yml file:
class_name: ApiTester
modules:
enabled:
- REST:
depends: Laravel5
- \Helper\Api
- Db
And this is my code in codeception.yml:
actor: Tester
paths:
tests: tests
log: tests/_output
data: tests/_data
support: tests/_support
envs: tests/_envs
settings:
bootstrap: _bootstrap.php
colors: true
memory_limit: 1024M
extensions:
enabled:
- Codeception\Extension\RunFailed
modules:
config:
Db:
dsn: 'mysql:host=127.0.0.1;dbname=test_records'
user: 'root'
password: ''
dump: 'tests/_data/dump.sql'
populate: true
cleanup: true
reconnect: true
Any pointers would be highly appreciated.
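One thing worth checking (a hedged pointer, assuming the REST calls run through the Laravel5 module rather than hitting a separate web server): the Laravel5 module has its own cleanup option, which wraps each test in a database transaction and rolls it back, and it can be configured per suite, e.g. in api.suite.yml:
class_name: ApiTester
modules:
    enabled:
        - REST:
            depends: Laravel5
        - \Helper\Api
        - Db
    config:
        Laravel5:
            cleanup: true  # wrap each test in a transaction and roll it back
If the table that is not rolling back is written to over a different connection, or the test triggers an implicit commit (for example DDL in MySQL), that transaction-based cleanup will not cover it, and the Db module's dump-based cleanup would have to handle it instead.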
