How to get status of Kaniko build - jenkins-pipeline

After running a Kaniko build against OpenShift, I notice that Jenkins reports the build as Success as long as nothing is wrong with the Jenkinsfile itself.
However, after the Success message the build is still running in OpenShift, and it can still fail depending on the Dockerfile, the build context, etc.
How can I make Jenkins wait for the status of the build in OpenShift and display its result in the Jenkins console?
Here is my code:
#!/usr/bin/env groovy
def projectProperties = [
    [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '5']]
]

node {
    withVault(configuration: [timeout: 60, vaultCredentialId: 'vault-approle-prd', vaultUrl: 'https://vault.example.redacted.com'],
              vaultSecrets: [[path: 'secrets/kaniko', secretValues: [[vaultKey: 'ghp_key']]]]) {
        sh 'echo $ghp_key > output.txt'
    }
}

node {
    GHP_KEY = readFile('output.txt').trim()
}
pipeline {
    agent {
        kubernetes {
            cloud 'openshift'
            idleMinutes 15
            activeDeadlineSeconds 1800
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  volumes:
    - name: build-context
      emptyDir: {}
    - name: kaniko-secret
      secret:
        secretName: regcred-${NAMESPACE}
        items:
          - key: .dockerconfigjson
            path: config.json
  securityContext:
    runAsUser: 0
  serviceAccount: kaniko
  initContainers:
    - name: kaniko-init
      image: ubuntu
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args: ["--context=git://${GHP_KEY}#github.com/Redacted/dockerfiles.git#refs/heads/${BRANCH}",
             "--destination=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}",
             "--dockerfile=/jenkins-slave-ansible/Dockerfile",
             "--skip-tls-verify",
             "--verbosity=debug"]
      resources:
        limits:
          cpu: 1
          memory: 5Gi
        requests:
          cpu: 100m
          memory: 256Mi
      volumeMounts:
        - name: build-context
          mountPath: /kaniko/build-context
        - name: kaniko-secret
          mountPath: /kaniko/.docker
  restartPolicy: Never
"""
        }
    }
    parameters {
        choice(name: 'NAMESPACE', choices: ['cloud', 'ce-jenkins-testing'])
        string(defaultValue: 'feature/', description: 'Please enter your branch name', name: 'BRANCH')
        string(defaultValue: 'jenkins-slave-ansible', description: 'Please enter your image name (e.g.: jenkins-slave-ansible)', name: 'IMAGE_NAME')
        string(defaultValue: 'latest', description: 'Please add your tag (e.g.: 1.72.29)', name: 'IMAGE_TAG')
    }
    options {
        timestamps()
    }
    stages {
        stage('Image Test') {
            steps {
                echo "Image name: ${IMAGE_NAME}"
                echo "Image tag: ${IMAGE_TAG}"
            }
        }
    }
}
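The reason Jenkins reports Success early is that the kaniko executor runs as the container's args, so the build starts as soon as the agent pod is scheduled and keeps running after the pipeline's only stage (which just echoes two values) has finished. One way to make Jenkins block on the build and surface its result in the console is to keep the kaniko container idle and invoke the executor as a sh step, so a failed build fails the stage. This is a minimal sketch, not tested against this setup: it assumes the gcr.io/kaniko-project/executor:debug image (which ships a shell) and reuses the same flags as above.

In the pod YAML, replace the args with an idle command:

    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug
      command: ["/busybox/cat"]
      tty: true

Then run the executor from a stage:

    stages {
        stage('Kaniko Build') {
            steps {
                container('kaniko') {
                    // Runs in the foreground: the executor's log output goes to the
                    // Jenkins console and a non-zero exit code fails this stage.
                    sh """/kaniko/executor \\
                        --context=git://${GHP_KEY}#github.com/Redacted/dockerfiles.git#refs/heads/${BRANCH} \\
                        --destination=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} \\
                        --dockerfile=/jenkins-slave-ansible/Dockerfile \\
                        --skip-tls-verify \\
                        --verbosity=debug"""
                }
            }
        }
    }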

Related

Mariadb on kubernetes: Got an error reading communication packets

I am trying to deploy an application with a MariaDB database on my k8s cluster. This is the deployment I use:
apiVersion: v1
kind: Service
metadata:
  name: app-back
  labels:
    app: app-back
  namespace: dev
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: app-back
  selector:
    app: app-back
---
apiVersion: v1
kind: Service
metadata:
  name: app-db
  labels:
    app: app-db
  namespace: dev
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 3306
      name: app-db
  selector:
    app: app-db
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  60-server.cnf: |
    [mysqld]
    bind-address = 0.0.0.0
    skip-name-resolve
    connect_timeout = 600
    net_read_timeout = 600
    net_write_timeout = 600
    max_allowed_packet = 256M
    default-time-zone = +00:00
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-db
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app-db
  template:
    metadata:
      labels:
        app: app-db
    spec:
      containers:
        - name: app-db
          image: mariadb:10.5.8
          env:
            - name: MYSQL_DATABASE
              value: app
            - name: MYSQL_USER
              value: app
            - name: MYSQL_PASSWORD
              value: app
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: "true"
          ports:
            - containerPort: 3306
              name: app-db
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "400Mi"
              cpu: "200m"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/mysql/conf.d
      volumes:
        - name: config-volume
          configMap:
            name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-back
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app-back
  template:
    metadata:
      labels:
        app: app-back
    spec:
      containers:
        - name: app-back
          image: private-repository/app/app-back:latest
          env:
            - name: spring.profiles.active
              value: dev
            - name: DB_HOST
              value: app-db
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: app
            - name: DB_USER
              value: app
            - name: DB_PASSWORD
              value: app
          ports:
            - containerPort: 8080
              name: app-back
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "200Mi"
              cpu: "400m"
      imagePullSecrets:
        - name: docker-private-credentials
When I run this, the mariadb container logs the following warnings:
2020-12-03 8:23:41 28 [Warning] Aborted connection 28 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 25 [Warning] Aborted connection 25 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 31 [Warning] Aborted connection 31 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 29 [Warning] Aborted connection 29 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
...
My app is stuck trying to connect to the database. The application is a Spring Boot application built with this Dockerfile:
FROM maven:3-adoptopenjdk-8 AS builder
WORKDIR /usr/src/mymaven/
COPY . .
RUN mvn clean package -e -s settings.xml -DskipTests
FROM tomcat:9-jdk8-adoptopenjdk-hotspot
ENV spring.profiles.active=dev
ENV DB_HOST=localhost
ENV DB_PORT=3306
ENV DB_NAME=app
ENV DB_USER=app
ENV DB_PASSWORD=app
COPY --from=builder /usr/src/mymaven/target/app.war /usr/local/tomcat/webapps/
Any idea?
OK, I found the solution. This was not a MariaDB error: it is caused by the application server (Apache Tomcat) breaking the connection when it runs inside a container with too little memory. Setting the memory limit to 1500Mi solved the problem.
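For reference, that fix maps onto the app-back Deployment above: only the memory limit of the backend container changes, e.g.:

          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "1500Mi"
              cpu: "400m"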

server-deployment.yml not reading values from server-config.yml in Spring Cloud Data Flow server

I have deployed the custom-built SCDF 2.52 in an OpenShift environment, and it is up and running successfully. I followed the 2.5.0.RELEASE guide. The issue is that the properties given in server-config are not picked up by server-deployment.yaml when I mount them. Although I can see the mapping for application.yaml in the deployment configuration, the properties are not read while the server is starting.
So when I build the custom SCDF I have to add all the server properties, including the Kubernetes memory limits and the Oracle (external) datasource properties, to the SCDF project's application.properties file. Only then are the Kubernetes properties read, the platform set up, and the external Oracle datasource connected. Below are the files I'm using. I'm new to SCDF and Kubernetes, so please let me know if I'm missing anything anywhere.
Also, the reason why I added the Kubernetes properties to the custom SCDF project's application.properties is explained in this question.
server-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scdf-server
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:oracle:thin:#hostname:port/db
        username: root
        password: oracle-root-password
        driver-class-name: oracle.jdbc.OracleDriver
        testOnBorrow: true
        validationQuery: "SELECT 1"
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  replicas: 1
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: docker-registry.default.svc:5000/batchadmin/scdf-server
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /management/health
              port: 80
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /management/info
              port: 80
            initialDelaySeconds: 45
          resources:
            limits:
              cpu: 1.0
              memory: 2048Mi
            requests:
              cpu: 0.5
              memory: 1024Mi
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: SERVER_PORT
              value: '80'
            - name: SPRING_CLOUD_CONFIG_ENABLED
              value: 'false'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
              value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
            - name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
              value: /etc/secrets
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_SERVER_URI
              value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
            # Add Maven repo for metadata artifact resolution for all stream apps
            - name: SPRING_APPLICATION_JSON
              value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
      serviceAccountName: scdf-sa
      volumes:
        - name: config
          configMap:
            name: scdf-server
            items:
              - key: application.yaml
                path: application.yaml
application.properties - the only thing that actually makes SCDF run right now.
spring.application.name=batchadmin
spring.datasource.url=jdbc:oracle:thin:#hostname:port/db
spring.datasource.username=root
spring.datasource.password=oracle_root_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.image-pull-policy= always
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.entry-point-style= exec
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.cpu=2
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.memory=1024Mi
spring.flyway.enabled=false
spring.jpa.show-sql=true
spring.jpa.hibernate.use-new-id-generator-mappings=true
logging.level.root=info
logging.file.max-size=5GB
logging.file.max-history=30
logging.pattern.console=%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger.%M - %msg%n
My main concern here, apart from the above issue, is the DB password. Since SCDF passes all the datasource- and Kubernetes-related application.properties as job parameters, including the DB password, the password is printed in the logs, visible in the running pod's config, and stored in batch_job_execution_params.
Application.properties as Job params
To summarize the issues here as questions:
1. The server-config.yaml properties are not being used by server-deployment.yaml. What went wrong?
2. Since I pass the server properties via the application.properties file, all the properties are visible in the logs as well as in the DB. Is there a way I could hide them?
Thanks in advance.
server-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: scdf-role
rules:
  - apiGroups: [""]
    resources: ["services", "pods", "replicationcontrollers", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["extensions"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["batch"]
    resources: ["cronjobs", "jobs"]
    verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]
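On the second question, one option is to move the datasource credentials out of application.properties and into a Secret mounted at /etc/secrets, so they are no longer passed around as plain-text properties. This is only a sketch, not part of the original setup: the Secret name scdf-datasource is hypothetical, and it relies on the Spring Cloud Kubernetes secrets support that the deployment already points at via SPRING_CLOUD_KUBERNETES_SECRETS_PATHS=/etc/secrets.

apiVersion: v1
kind: Secret
metadata:
  name: scdf-datasource        # hypothetical name
  labels:
    app: scdf-server
type: Opaque
stringData:
  # each key becomes a file under /etc/secrets and is read as a property
  spring.datasource.username: root
  spring.datasource.password: oracle_root_password

and in server-deployment.yaml, mount it alongside the existing config volume:

          volumeMounts:
            - name: datasource-secret
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: datasource-secret
          secret:
            secretName: scdf-datasource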

How can I edit path.data and path.log for elasticsearch on Kubernetes?

I made an es-deploy.yml file and set the path.logs and path.data values.
After creating the pod, I checked that directory and there was nothing in it.
The setting did not work!
How can I edit path.data and path.logs for Elasticsearch on Kubernetes?
I also tried using PATH_DATA.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      initContainers:
        - name: init-sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:
        - name: es
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "CLUSTER_NAME"
              value: "myesdb"
            - name: "DISCOVERY_SERVICE"
              value: "elasticsearch"
            - name: NODE_MASTER
              value: "true"
            - name: NODE_DATA
              value: "true"
            - name: HTTP_ENABLE
              value: "true"
            - name: ES_JAVA_OPTS
              value: "-Xms256m -Xmx256m"
            - name: "path.data"
              value: "/data/elk/data"
            - name: "path.logs"
              value: "/data/elk/log"
          ports:
            - containerPort: 9200
              name: http
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          volumeMounts:
            - mountPath: /data/elk/
The values path.data and path.logs are not environment variables; they are config options.
The default path.data for the official Elasticsearch image is /usr/share/elasticsearch/data, based on the default value of ES_HOME=/usr/share/elasticsearch. If you don't want to use that path, you have to override it in the elasticsearch.yml config.
You will have to create a ConfigMap containing your elasticsearch.yml, with something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: es
data:
  elasticsearch.yml: |
    cluster:
      name: ${CLUSTER_NAME:elasticsearch-default}
    node:
      master: ${NODE_MASTER:true}
      data: ${NODE_DATA:true}
      name: ${NODE_NAME}
      ingest: ${NODE_INGEST:true}
      max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
    processors: ${PROCESSORS:1}
    network.host: ${NETWORK_HOST:_site_}
    path:
      data: ${DATA_PATH:"/data/elk"}
      repo: ${REPO_LOCATIONS:[]}
    bootstrap:
      memory_lock: ${MEMORY_LOCK:false}
    http:
      enabled: ${HTTP_ENABLE:true}
      compression: true
      cors:
        enabled: true
        allow-origin: "*"
    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
        minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
    xpack:
      license.self_generated.type: basic
(Note that the above ConfigMap will also allow you to use the DATA_PATH environment variable)
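For example, a matching env entry on the es container (a sketch; the value just has to agree with where the data volume is mounted, /data/elk in the mount example below) could replace the path.data / path.logs entries from the original Deployment:

          env:
            - name: DATA_PATH
              value: /data/elk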
Then mount your volumes in your Pod with something like this:
        volumeMounts:
          - name: storage
            mountPath: /data/elk
          - name: config-volume
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml
      volumes:
        - name: config-volume
          configMap:
            name: elasticsearch-config
        - name: storage
          <add-whatever-volume-you-are-using-for-data>

Jenkins JNLP slave is stuck on progress bar when need to run Maven job

I have a problem with my Jenkins, which runs on K8s.
My pipeline runs with 2 containers - JNLP alpine (the default for k8s) and Maven (3.6.0, based on the java-8:jdk-8u191-slim image).
From time to time, after starting a new build, it gets stuck and the build makes no progress.
Entering the pod:
JNLP - seems to be functioning as expected.
Maven - no job is running (checked with ps -ef).
I'd appreciate your help.
I tried pausing and resuming - that did not solve it.
The only way out is to abort and re-initiate the build.
Jenkins version - 2.164.1
My pipeline is:
properties([[$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false],
            parameters([string(defaultValue: 'master', description: '', name: 'branch', trim: false),
                        string(description: 'enter your namespace name', name: 'namespace', trim: false)])])

def label = "jenkins-slave-${UUID.randomUUID().toString()}"

podTemplate(label: label, namespace: "${params.namespace}", yaml: """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    group: maven-ig
  containers:
    - name: maven
      image: accontid.dkr.ecr.us-west-2.amazonaws.com/base_images:maven-base
      command: ['cat']
      resources:
        limits:
          memory: "16Gi"
          cpu: "16"
        requests:
          memory: "10Gi"
          cpu: "10"
      tty: true
      env:
        - name: ENVIRONMENT
          value: dev
        - name: config.env
          value: k8s
      volumeMounts:
        - mountPath: "/local"
          name: nvme
  volumes:
    - name: docker
      hostPath:
        path: /var/run/docker.sock
    - name: nvme
      hostPath:
        path: /local
"""
) {
    node(label) {
        wrap([$class: 'TimestamperBuildWrapper']) {
            checkout([$class: 'GitSCM', branches: [[name: "*/${params.branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'credetials', url: 'https://github.com/example/example.git']]])
            wrap([$class: 'BuildUser']) {
                user = env.BUILD_USER_ID
            }
            currentBuild.description = 'Branch: ' + "${branch}" + ' | Namespace: ' + "${user}"
            stage('Stable tests') {
                container('maven') {
                    try {
                        sh "find . -type f -name '*k8s.*ml' | xargs sed -i -e 's|//mysql|//mysql.${user}.svc.cluster.local|g'"
                        sh "mvn -f pom.xml -Dconfig.env=k8s -Dwith_stripe_stub=true -Dpolicylifecycle.integ.test.url=http://testlifecycleservice-${user}.testme.io -Dmaven.test.failure.ignore=false -Dskip.surefire.tests=true -Dmaven.test.skip=false -Dskip.package.for.deployment=true -T 1C clean verify -fn -P stg-stable-it"
                    }
                    finally {
                        archive "**/target/failsafe-reports/*.xml"
                        junit '**/target/failsafe-reports/*.xml'
                    }
                }
            }
            // stage ('Delete namespace') {
            //     build job: 'delete-namspace', parameters: [
            //         [$class: 'StringParameterValue', name: 'namespace', value: "${user}"]], wait: false
            // }
        }
    }
}

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster, which is on AWS. To access the Kibana dashboard I have created an Ingress that is mapped to xyz.com. Here is my Kibana deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.3.2
          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 5601
              name: http
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe that the example config in the official Kibana repository gives a hint on the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
The fact that server.basePath cannot end in a slash could mean that Kibana interprets your setting of "/" as ending in a slash. I haven't dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So the error message complements the documentation: the basePath has to start with a slash but must not end with one.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper basePath.
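For illustration, the change amounts to something like this in the Deployment's env section (a sketch: /kibana is just a placeholder path that starts with a slash and does not end with one, and it should match whatever path your Ingress exposes Kibana under):

          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /kibana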
