Jenkins JNLP slave is stuck on progress bar when need to run Maven job

I have a problem with my Jenkins which runs on K8s.
My pipeline pod is built with 2 containers - jnlp (the default alpine-based one for k8s) and Maven (3.6.0, based on the java-8:jdk-8u191-slim image).
From time to time, after starting a new build, it gets stuck and the build makes no progress.
Entering the Pod:
Jnlp - seems to be functioning as expected
Maven - no job is running (running ps -ef).
Appreciate your help.
I tried to pause/resume the build - it didn't solve it.
The only way out is to abort and re-initiate the build.
Jenkins version - 2.164.1
My pipeline is :
properties([[$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false],
            parameters([string(defaultValue: 'master', description: '', name: 'branch', trim: false),
                        string(description: 'enter your namespace name', name: 'namespace', trim: false)])])

def label = "jenkins-slave-${UUID.randomUUID().toString()}"

podTemplate(label: label, namespace: "${params.namespace}", yaml: """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    group: maven-ig
  containers:
  - name: maven
    image: accontid.dkr.ecr.us-west-2.amazonaws.com/base_images:maven-base
    command: ['cat']
    resources:
      limits:
        memory: "16Gi"
        cpu: "16"
      requests:
        memory: "10Gi"
        cpu: "10"
    tty: true
    env:
    - name: ENVIRONMENT
      value: dev
    - name: config.env
      value: k8s
    volumeMounts:
    - mountPath: "/local"
      name: nvme
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
  - name: nvme
    hostPath:
      path: /local
"""
) {
    node(label) {
        wrap([$class: 'TimestamperBuildWrapper']) {
            checkout([$class: 'GitSCM', branches: [[name: "*/${params.branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'credetials', url: 'https://github.com/example/example.git']]])
            wrap([$class: 'BuildUser']) {
                user = env.BUILD_USER_ID
            }
            currentBuild.description = 'Branch: ' + "${branch}" + ' | Namespace: ' + "${user}"

            stage('Stable tests') {
                container('maven') {
                    try {
                        sh "find . -type f -name '*k8s.*ml' | xargs sed -i -e 's|//mysql|//mysql.${user}.svc.cluster.local|g'"
                        sh "mvn -f pom.xml -Dconfig.env=k8s -Dwith_stripe_stub=true -Dpolicylifecycle.integ.test.url=http://testlifecycleservice-${user}.testme.io -Dmaven.test.failure.ignore=false -Dskip.surefire.tests=true -Dmaven.test.skip=false -Dskip.package.for.deployment=true -T 1C clean verify -fn -P stg-stable-it"
                    }
                    finally {
                        archive "**/target/failsafe-reports/*.xml"
                        junit '**/target/failsafe-reports/*.xml'
                    }
                }
            }

            // stage ('Delete namespace') {
            //     build job: 'delete-namspace', parameters: [
            //         [$class: 'StringParameterValue', name: 'namespace', value: "${user}"]], wait: false
            // }
        }
    }
}
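A stopgap that is not part of the original post (a minimal sketch; the 60-minute limit is an assumed value): wrapping the work in a timeout() step lets Jenkins abort a hung build automatically instead of leaving it stuck until someone aborts it by hand.

// Hedged sketch: auto-abort a build that hangs in the maven container.
// 60 minutes is an assumption - pick a limit above the normal build time.
node(label) {
    timeout(time: 60, unit: 'MINUTES') {
        wrap([$class: 'TimestamperBuildWrapper']) {
            // ... checkout and the 'Stable tests' stage exactly as above ...
        }
    }
}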

Related

Cypress intercept and wait not working on GitHub Actions

All my tests run and pass as expected locally; however, there is one test that fails when running on GitHub Actions. The error I'm getting is:
CypressError: Timed out retrying after 60000ms:
`cy.wait()` timed out waiting `60000ms` for the 1st request to the route:
`addCustomerMutation`. No request ever occurred.
My code looks like this:
// IMPORTS
cy.intercept('POST', URL, (req) => {
  delete req.headers['if-none-match'];
  const addCustomerOp = 'addCustomerMutation';
  const { body } = req;
  if (body.hasOwnProperty('operationName') && body.operationName === addCustomerOp) {
    req.alias = addCustomerOp;
    req.reply({ data: addCustomer });
  }
});
// FILL UP A FORM
cy.get('[data-testid="customer-save-button"]').click();
cy.wait('@addCustomerMutation', { timeout: 60000 })
  .its('request.body.variables')
  .should('deep.equal', {
    clientId: 'client_id',
    customer: {
      firstName: 'test',
      lastName: 'tester',
      email: 'test@test.com',
      phone: {
        name: '',
        number: '(531) 731-3151',
      },
      alternatePhones: [],
    },
  });
How can I resolve this issue? So far I tried:
reverting Cypress back to 6.4 and 6.8, per https://github.com/cypress-io/cypress/issues/3427#issuecomment-462490501
adding another assertion right after the wait, per https://stackoverflow.com/a/71754497/9842672
increasing the timeout up to 100000
None of them worked; the local tests still ran and passed with all of these options.
My GitHub Actions workflow looks like this, if it helps:
jobs:
  test:
    concurrency: test # so that we don't run tests for another branch at the same time
    runs-on: ubuntu-latest
    container: cypress/browsers
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup npm token
        run: |
          echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN_RO }}" >> ./.npmrc
      - name: Setup node
        uses: actions/setup-node@v3
        with:
          node-version-file: '.nvmrc'
          registry-url: 'https://registry.npmjs.org'
          cache: npm
      - name: Npm install
        run: npm ci
      - name: Run tests
        uses: cypress-io/github-action@v5
        with:
          env: username=${{ secrets.CYPRESS_TEST_USER }},password=${{ secrets.CYPRESS_TEST_PASSWORD }},REACT_APP_URI=${{ secrets.REACT_APP_URI }}
          install-command: | # TODO: check node modules before you npm ci here
            npm ci --production=false
          browser: chrome
          spec: cypress/integration/*.spec.*
          start: npm run start:cypress
          wait-on: npx wait-on http://localhost:3000
      - name: Upload test screenshots
        uses: actions/upload-artifact@v3
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots
      # Test run video was always captured, so this action uses "always()" condition
      - name: Upload test videos
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: cypress-videos
          path: cypress/videos
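One debugging angle worth trying (a hedged sketch, not from the original post): give every intercepted POST an alias, not only the matching one, so the Cypress command log from the CI run shows whether the mutation ever reaches the intercepted URL and which operationName it carries. If @addCustomerMutation still never fires, the app in CI is most likely posting to a different URL (for example a different REACT_APP_URI) or sending a different operationName.

// Hedged sketch - URL and the addCustomer fixture are the same assumed names as above.
cy.intercept('POST', URL, (req) => {
  delete req.headers['if-none-match'];
  if (req.body && req.body.operationName === 'addCustomerMutation') {
    req.alias = 'addCustomerMutation';
    req.reply({ data: addCustomer });
  } else {
    // Alias non-matching calls too, so they show up in the Cypress command
    // log (and in the recorded video) when the CI run is inspected.
    req.alias = 'otherGraphqlCall';
  }
});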

How to get status of Kaniko build

After running a Kaniko build to OpenShift, I notice that Jenkins displays the status as Success as long as nothing is wrong with the code in the Jenkinsfile itself.
However, after the Success message the build is still going on in OpenShift, and it can still fail depending on the Dockerfile, the build context, etc.
How could I have Jenkins wait for the status of the build in OpenShift and display its result in the Jenkins console?
Here is my code:
#!/usr/bin/env groovy
def projectProperties = [
    [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '5']]
]

node {
    withVault(configuration: [timeout: 60, vaultCredentialId: 'vault-approle-prd', vaultUrl: 'https://vault.example.redacted.com'], vaultSecrets: [[path: 'secrets/kaniko', secretValues: [[vaultKey: 'ghp_key']]]]) {
        sh 'echo $ghp_key > output.txt'
    }
}

node {
    GHP_KEY = readFile('output.txt').trim()
}
pipeline {
    agent {
        kubernetes {
            cloud 'openshift'
            idleMinutes 15
            activeDeadlineSeconds 1800
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  volumes:
  - name: build-context
    emptyDir: {}
  - name: kaniko-secret
    secret:
      secretName: regcred-${NAMESPACE}
      items:
      - key: .dockerconfigjson
        path: config.json
  securityContext:
    runAsUser: 0
  serviceAccount: kaniko
  initContainers:
  - name: kaniko-init
    image: ubuntu
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--context=git://${GHP_KEY}@github.com/Redacted/dockerfiles.git#refs/heads/${BRANCH}",
           "--destination=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}",
           "--dockerfile=/jenkins-slave-ansible/Dockerfile",
           "--skip-tls-verify",
           "--verbosity=debug"]
    resources:
      limits:
        cpu: 1
        memory: 5Gi
      requests:
        cpu: 100m
        memory: 256Mi
    volumeMounts:
    - name: build-context
      mountPath: /kaniko/build-context
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
"""
        }
    }
    parameters {
        choice(name: 'NAMESPACE', choices: ['cloud', 'ce-jenkins-testing'])
        string(defaultValue: 'feature/', description: 'Please enter your branch name', name: 'BRANCH')
        string(defaultValue: 'jenkins-slave-ansible', description: 'Please enter your image name (e.g.: jenkins-slave-ansible)', name: 'IMAGE_NAME')
        string(defaultValue: 'latest', description: 'Please add your tag (e.g.: 1.72.29)', name: 'IMAGE_TAG')
    }
    options {
        timestamps()
    }
    stages {
        stage('Image Test') {
            steps {
                echo "Image name: ${IMAGE_NAME}"
                echo "Image tag: ${IMAGE_TAG}"
            }
        }
    }
}
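One way to make Jenkins actually wait for Kaniko (a hedged sketch, not a confirmed fix): use the executor:debug image, which ships a busybox shell, keep the container idle, and start the executor from an sh step. The stage then blocks until Kaniko exits, and a non-zero exit code fails the build right in the Jenkins console. The container definition drops the args block and the arguments move into a stage, roughly like this:

  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug   # the debug tag includes /busybox/sh
    command: ["/busybox/cat"]
    tty: true

stage('Kaniko build') {
    steps {
        container('kaniko') {
            // Runs in the foreground, so Jenkins waits for it and reports the real result.
            sh """/kaniko/executor \
                --context=git://${GHP_KEY}@github.com/Redacted/dockerfiles.git#refs/heads/${BRANCH} \
                --destination=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} \
                --dockerfile=/jenkins-slave-ansible/Dockerfile \
                --skip-tls-verify --verbosity=debug"""
        }
    }
}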

Ansible handler for restarting Docker Swarm service

I need to restart containers of a Docker Swarm service with Ansible.
The basic definition looks like this:
# tasks/main.yml
- name: 'Create the service container'
  docker_swarm_service:
    name: 'service'
    image: 'service'
    networks:
      - name: 'internet'
      - name: 'reverse-proxy'
    publish:
      - { target_port: '80', published_port: '80', mode: 'ingress' }
      - { target_port: '443', published_port: '443', mode: 'ingress' }
      - { target_port: '8080', published_port: '8080', mode: 'ingress' }
    mounts:
      - { source: '{{ shared_dir }}', target: '/shared' }
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_config:
      condition: 'on-failure'
    user: null
    force_update: yes
So I thought that
# handlers/main.yml
- name: 'Restart Service'
  docker_swarm_service:
    name: some-service
    image: 'some-image'
    force_update: yes
should work as a handler, but it seems it doesn't pick up all the options.
Any advice on how to properly restart the containers of a Docker Swarm service?
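A hedged alternative (a minimal sketch; assumes the Docker CLI is available on the Swarm manager the play targets, and reuses the service name from the task above): instead of repeating the whole docker_swarm_service spec in the handler, force a rolling restart with docker service update --force, which redeploys the tasks without touching the rest of the service definition.

# handlers/main.yml - minimal sketch, service name taken from the task above
- name: 'Restart Service'
  command: docker service update --force service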

Running sonarqube in jenkinsfile's containertemplate

I'm trying to run a SonarQube scan with a ContainerTemplate - I'm using this image (newtmitch/sonar-scanner).
When I run docker run -ti -v $(pwd):/usr/src newtmitch/sonar-scanner it connects to my localhost SonarQube server (defined in sonar-project.properties), and I get the logs of the sonar analysis.
But when I use a podTemplate instead, it doesn't give me any output (only "running"). Why doesn't the container template run the sonar scanner? This is the relevant part of my code:
podTemplate(label: 'jenkins-pipeline', containers: [
    containerTemplate(name: 'jnlp', image: 'jenkinsci/jnlp-slave:2.62', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins', resourceRequestCpu: '500m', resourceLimitCpu: '500m', resourceRequestMemory: '1024Mi', resourceLimitMemory: '1024Mi'),
    containerTemplate(name: 'docker', image: 'docker:1.12.6', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'maven', image: 'maven:3.5.0-jdk-8', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'sonar-scanner-newtmitch', image: 'newtmitch/sonar-scanner', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.3', command: 'cat', ttyEnabled: true)
],
volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
]) {
    node('jenkins-pipeline') {
        container("maven") {
            checkout scm
        }
        container("sonar-scanner-newtmitch") {
            stage("run sonar scanner") {
                sh "echo running"
            }
        }
    }
}
I already solved this issue, but forgot the question was open. I saw that the container template overrides the ENTRYPOINT of the sonar image, so I had to add a stage:
stage("scan") {
    sh "sonar-scanner"
}
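Putting that self-answer together, the relevant part ends up looking roughly like this (a sketch; the checkout done in the maven container lands in the shared pod workspace, so the scanner container sees the sources and sonar-project.properties):

container("sonar-scanner-newtmitch") {
    stage("scan") {
        // command: 'cat' in the containerTemplate overrides the image ENTRYPOINT,
        // so the scanner has to be started explicitly.
        sh "sonar-scanner"
    }
}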

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster which is on AWS. To access the Kibana dashboard I have created an ingress which is mapped to xyz.com. Here is my Kibana deployment file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.3.2
        env:
        - name: CLUSTER_NAME
          value: myesdb
        - name: SERVER_BASEPATH
          value: /
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 5601
          name: http
        readinessProbe:
          httpGet:
            path: /api/status
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide: https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
Since server.basePath cannot end in a slash, a bare "/" is rejected: Kibana sees it as ending in a slash. I've not dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So the error message complements the documentation: the base path has to start with a slash but must not end with one, and a value of "/" does both at once.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper base path.
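A minimal sketch of the fix (the /kibana prefix is an assumed example and has to match the path your ingress exposes; if the ingress serves Kibana at the root, you can drop SERVER_BASEPATH entirely and let it default):

        env:
        - name: CLUSTER_NAME
          value: myesdb
        - name: SERVER_BASEPATH
          value: /kibana   # starts with a slash, does not end with one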
