Alias in docker's ~/.bashrc not found in jenkins pipeline - bash

I created an alias as part of my docker image and put the command in ~/.bashrc. The alias is really simple: it just has to run a script that was installed during the image build. When I ran the docker container on my local machine, I was able to see the alias. In the Jenkins pipeline, too, when I cat the .bashrc, I can see the alias command. This is my Dockerfile.
FROM ros:melodic-ros-core-stretch
RUN apt-get update
COPY ./scripts /scripts
RUN cat scripts/alias.sh > ~/.bashrc
RUN bash scripts/install-tools.bash
This is what install-tools.bash looks like.
#!/bin/bash
mkdir -p /opt/scripts
install --mode=755 --group=root --owner=root /scripts/pre-build.bash /opt/scripts/
rm -rf /scripts
This is what alias.sh looks like.
alias pre-build="bash /opt/scripts/pre-build.bash"
This is what the Jenkinsfile looks like.
pipeline {
    agent {
        docker {
            args '--network host -u root:root'
            image <private repo from dockerhub>
            registryCredentialsId 'docker-credentials'
            registryUrl 'https://registry.hub.docker.com'
        }
    }
    stages {
        stage ('Test') {
            steps {
                sh '''#!/bin/bash
                pre-build
                '''
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
This is the error I'm getting from Jenkins.
Started by user Automated Build Environment
Obtained Jenkinsfile from git https://github.com/<Organization>/test_pkg
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test_pkg
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
using credential 205a054c-54a3-4cb3-9a7f-519516cc3050
Cloning the remote Git repository
Cloning repository https://github.com/<Organization>/test_pkg
> git init /var/lib/jenkins/workspace/test_pkg # timeout=10
Fetching upstream changes from https://github.com/<Organization>/test_pkg
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress https://github.com/<Organization>/test_pkg +refs/heads/*:refs/remotes/origin/*
> git config remote.origin.url https://github.com/<Organization>/test_pkg # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/<Organization>/test_pkg # timeout=10
Fetching upstream changes from https://github.com/<Organization>/test_pkg
using GIT_ASKPASS to set credentials
> git fetch --tags --progress https://github.com/<Organization>/test_pkg +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 514613afaf7c8be82bd20e219c062f7a4b3fb733 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 514613afaf7c8be82bd20e219c062f7a4b3fb733
Commit message: "Update Jenkinsfile"
> git rev-list --no-walk 8c840617b0aed9ae50d00700192ed3573bb1c0d4 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
$ docker login -u <private docker hub username> -p ******** https://registry.hub.docker.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/workspace/test_pkg@tmp/98982a2f-21bf-4460-9dab-d13f8ddba164/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . <private docker hub repo>
Error: No such object: <private docker hub repo>
[Pipeline] sh
+ docker inspect -f . registry.hub.docker.com/<private docker hub repo>
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 125:130 --network host -u root:root -w /var/lib/jenkins/workspace/test_pkg -v /var/lib/jenkins/workspace/test_pkg:/var/lib/jenkins/workspace/test_pkg:rw,z -v /var/lib/jenkins/workspace/test_pkg@tmp:/var/lib/jenkins/workspace/test_pkg@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** registry.hub.docker.com/<private docker hub repo> cat
$ docker top 657db353b9e3249c87468eafdeae64246f6b2b1ebeb1cd5ed00c618fc8f1691c -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
/var/lib/jenkins/workspace/test_pkg@tmp/durable-e7036032/script.sh: line 2: pre-build: command not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 657db353b9e3249c87468eafdeae64246f6b2b1ebeb1cd5ed00c618fc8f1691c
$ docker rm -f 657db353b9e3249c87468eafdeae64246f6b2b1ebeb1cd5ed00c618fc8f1691c
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I'm trying to find out how to add aliases to a docker image so that I can access them in a Jenkinsfile while running a declarative pipeline.

You should broadly assume .bashrc and similar dotfiles don’t work in Docker. Many common ways of running commands in Docker either don’t run a shell at all or run one in a way that doesn’t read dotfiles:
docker run --rm myimage some command
directly runs some command without invoking a shell at all. Even if you explicitly use sh -c "some command", that won’t read shell dotfiles.
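You can check this locally; for instance (a quick demonstration, with myimage standing in for the image built from the Dockerfile above):
docker run --rm myimage bash -c 'pre-build'
# fails with "command not found": a non-interactive bash does not read ~/.bashrc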
You don’t need alias declarations like the one you show, especially in Docker. Just install the script into some directory that’s already in $PATH, like /usr/local/bin.
COPY ./scripts /usr/local/bin/
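A minimal sketch of the Dockerfile rewritten this way, assuming pre-build.bash starts with a #!/bin/bash shebang (names taken from the question):
FROM ros:melodic-ros-core-stretch
RUN apt-get update
# install the script onto $PATH under the name the pipeline calls; no alias needed
COPY ./scripts/pre-build.bash /usr/local/bin/pre-build
RUN chmod 0755 /usr/local/bin/pre-build
With that, pre-build resolves like any other command, in any shell, with no dotfiles involved.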

You didn't run bash; you ran sh, with the first command being a comment that refers to /bin/bash:
sh '''#!/bin/bash
pre-build
'''
If you want to run bash, try something like:
sh "/bin/bash -c 'pre-build'"
You may also need to configure bash to act like a login shell to get it to process the normal login startup files, such as .bashrc:
sh "/bin/bash -l -c 'pre-build'"

Related

How do I fix this Docker error in Jenkins (on Windows)?

I am trying to get into Jenkins, but I have now been stuck on the first step of the tutorial https://www.jenkins.io/doc/pipeline/tour/hello-world/ for hours.
I installed the JDK and Docker Desktop on a Windows machine as instructed. For Docker Desktop I also needed to install WSL 2 with Ubuntu.
I have created a sample GitHub Repo with only a Jenkinsfile containing the sample code from the tutorial:
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'python:3.10.7-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'python --version'
            }
        }
    }
}
If I start a Jenkins pipeline connected to the GitHub repository and try to build it, I get the following error:
13:01:46 Connecting to https://api.github.com with no credentials, anonymous access
Obtained Jenkinsfile from 7533f199524c45c20290f15d17d103958b24abc3
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in C:\Users\wenni\.jenkins\workspace\FirstPipeline_main
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
The recommended git tool is: NONE
No credentials specified
> C:\Program Files\Git\cmd\git.exe rev-parse --resolve-git-dir C:\Users\wenni\.jenkins\workspace\FirstPipeline_main\.git # timeout=10
Fetching changes from the remote Git repository
> C:\Program Files\Git\cmd\git.exe config remote.origin.url https://github.com/Wagnerd6/JenkinsPipelineTest.git # timeout=10
Fetching without tags
Fetching upstream changes from https://github.com/Wagnerd6/JenkinsPipelineTest.git
> C:\Program Files\Git\cmd\git.exe --version # timeout=10
> git --version # 'git version 2.38.1.windows.1'
> C:\Program Files\Git\cmd\git.exe fetch --no-tags --force --progress -- https://github.com/Wagnerd6/JenkinsPipelineTest.git +refs/heads/main:refs/remotes/origin/main # timeout=10
Checking out Revision 7533f199524c45c20290f15d17d103958b24abc3 (main)
> C:\Program Files\Git\cmd\git.exe config core.sparsecheckout # timeout=10
> C:\Program Files\Git\cmd\git.exe checkout -f 7533f199524c45c20290f15d17d103958b24abc3 # timeout=10
Commit message: "Python docker img"
> C:\Program Files\Git\cmd\git.exe rev-list --no-walk e7de46ec1626389bb4e5c5577013ebaa8f214de2 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
C:\Users\wenni\.jenkins\workspace\FirstPipeline_main>docker inspect -f . "python:3.10.7-alpine"
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -d -t -w C:/Users/wenni/.jenkins/workspace/FirstPipeline_main/ -v C:/Users/wenni/.jenkins/workspace/FirstPipeline_main/:C:/Users/wenni/.jenkins/workspace/FirstPipeline_main/ -v C:/Users/wenni/.jenkins/workspace/FirstPipeline_main@tmp/:C:/Users/wenni/.jenkins/workspace/FirstPipeline_main@tmp/ -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** python:3.10.7-alpine cmd.exe
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Failed to run image 'python:3.10.7-alpine'. Error: docker: Error response from daemon: the working directory 'C:/Users/wenni/.jenkins/workspace/FirstPipeline_main/' is invalid, it needs to be an absolute path.
See 'docker run --help'.
at org.jenkinsci.plugins.docker.workflow.client.WindowsDockerClient.run(WindowsDockerClient.java:58)
at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:200)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:322)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:47)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:140)
at org.jenkinsci.plugins.docker.workflow.Docker.node(Docker.groovy:66)
at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:125)
at org.jenkinsci.plugins.docker.workflow.declarative.DockerPipelineScript.runImage(DockerPipelineScript.groovy:54)
at org.jenkinsci.plugins.docker.workflow.declarative.AbstractDockerPipelineScript.configureRegistry(AbstractDockerPipelineScript.groovy:63)
at org.jenkinsci.plugins.docker.workflow.declarative.AbstractDockerPipelineScript.run(AbstractDockerPipelineScript.groovy:50)
at org.jenkinsci.plugins.pipeline.modeldefinition.agent.CheckoutScript.checkoutAndRun(CheckoutScript.groovy:61)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:90)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
at jdk.internal.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:177)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:166)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:166)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:187)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:420)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:95)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:330)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:294)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:30)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:70)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Finished: FAILURE
When I replace agent { docker { image 'python:3.10.7-alpine' } } with agent any, the build works but of course fails on python --version, as Python is not installed.
My research only turned up that there is a problem with running Docker containers on Windows. Is it really not possible to simply follow the first tutorial page of Jenkins on a Windows machine?
Unfortunately, the docker agent is not compatible with a Windows-hosted Linux environment.
This has been a known problem for quite a while, and every effort to make a lasting fix has failed.
The main problem is the incompatibility between Windows and Linux file paths.
Hence the often-reported error:
"Error: docker: Error response from daemon: the working directory 'C:/Jenkins/workspace/testspace/' is invalid, it needs to be an absolute path."
The only way to quickly advance through the Jenkins tutorial is to use Docker CLI (command line) commands.
A basic example to continue the tutorial looks like this:
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                script {
                    /* the return value gets caught and saved into the variable MY_CONTAINER */
                    MY_CONTAINER = bat(script: '@docker run -d -i python:3.10.7-alpine', returnStdout: true).trim()
                    echo "mycontainer_id is ${MY_CONTAINER}"
                    /* python --version gets executed inside the container */
                    bat "docker exec ${MY_CONTAINER} python --version"
                    /* the container gets removed */
                    bat "docker rm -f ${MY_CONTAINER}"
                }
            }
        }
    }
}
There are two major changes compared to the Jenkins tutorial script:
instead of sh, Windows-based systems should use bat to execute batch commands;
the docker agent is avoided and replaced with any.

Go get "get" unexpected EOF

Thank you for visiting.
First of all, I apologize for my bad English; some of it may be a little wrong. I hope you can help me.
I ran into a small problem when deploying a new CI/CD system on a k8s platform (v1.23.5+1) with GitLab Runner (14.9.0) and dind (docker:dind).
When deploying CI for Golang apps with private repositories at https://gitlab.domain.com (I did the go env -w GOPRIVATE configuration), I had a problem with the go mod tidy command: specifically, it fails with an unexpected EOF error. I've tried go mod tidy -v, but it doesn't seem to give any more info.
I did a lot of work to figure out the problem. Specifically, I ran wget and git clone against my private repository, and both are still able to download it successfully. I tried adding a private repository hosted at https://gitlab.com to go.mod, and it can still be retrieved without any errors.
And actually, without using my new runner, I can still git clone and go mod tidy on another VPS.
All of this leaves me wondering where the error is actually coming from. Is it my GitLab or my k8s GitLab runner?
This is runner output
go: downloading gitlab.domain.com/nood/fountain v0.0.12
unexpected EOF
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
This is my .gitlab-ci.yml
image: docker:latest
stages:
  - build
  - deploy
variables:
  GTV_ECR_REPOSITORY_URL: repo.domain.com
  PROJECT: nood
  APP_NAME: backend-super-system
  APP_NAME_ECR: backend-super-system
  IMAGE_TAG: $GTV_ECR_REPOSITORY_URL/$PROJECT/$APP_NAME_ECR
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh", "--tls=false"]
build:
  stage: build
  allow_failure: false
  script:
    - echo "Building image."
    - docker pull $IMAGE_TAG || echo "Building runtime from scratch"
    - >
      docker build
      --cache-from $IMAGE_TAG
      -t $IMAGE_TAG --network host .
    - docker push $IMAGE_TAG
Dockerfile
FROM golang:alpine3.15
LABEL maintainer="NoodExe <nood.pr@gmail.com>"
WORKDIR /app
ENV BIN_DIR=/app/bin
RUN apk add --no-cache gcc build-base git
ADD . .
RUN chmod +x scripts/env.sh scripts/build.sh \
&& ./scripts/env.sh \
&& ./scripts/build.sh
# stage 2
FROM alpine:latest
WORKDIR /app
ENV BIN_DIR=/app/bin
ENV SCRIPTS_DIR=/app/scripts
ENV DATA_DIR=/app/data
# Build Args
ARG LOG_DIR=/var/log/nood
# Create log directory
RUN mkdir -p ${BIN_DIR} \
    && mkdir -p ${SCRIPTS_DIR} \
    && mkdir -p ${DATA_DIR} \
    && mkdir -p ${LOG_DIR} \
    && apk update \
    && addgroup -S nood \
    && adduser -S nood -G nood \
    && chown nood:nood /app \
    && chown nood:nood ${LOG_DIR}
USER nood
COPY --chown=nood:nood --from=0 ${BIN_DIR} /app
COPY --chown=nood:nood --from=0 ${DATA_DIR} ${DATA_DIR}
COPY --chown=nood:nood --from=0 ${SCRIPTS_DIR} ${SCRIPTS_DIR}
RUN chmod +x ${SCRIPTS_DIR}/startup.sh
ENTRYPOINT ["/app/scripts/startup.sh"]
scripts/env.sh
#!/bin/sh
go env -w GOPRIVATE=gitlab.domain.com/*
git config --global --add url."https://nood_deploy:rvbsosecret_Hizt97zQSn@gitlab.domain.com".insteadOf "https://gitlab.domain.com"
scripts/build.sh
#!/bin/sh
grep -v "replace\s.*=>.*" go.mod > tmpfile && mv tmpfile go.mod
go mod tidy
set -e
BIN_DIR=${BIN_DIR:-/app/bin}
mkdir -p "$BIN_DIR"
files=`ls *.go`
echo "****************************************"
echo "******** building applications **********"
echo "****************************************"
for file in $files; do
    echo building $file
    go build -o "$BIN_DIR"/${file%.go} $file
done
Thank you for still being here :3
This is a known issue with installing Go modules from GitLab when repositories live in nested (subgroup) locations. The issue describes several workarounds/solutions. One solution is described as follows:
create a GitLab Personal Access Token with at least the read_api and read_repository scopes.
create a .netrc file:
machine gitlab.com
login yourname@gitlab.com
password yourpersonalaccesstoken
use go get --insecure to get your module
do not use the .gitconfig insteadOf workaround
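A minimal sketch of wiring the .netrc approach into the env.sh from the question, assuming the token is supplied as a CI variable named GITLAB_TOKEN (that name is an assumption):
#!/bin/sh
# hypothetical replacement for scripts/env.sh: use .netrc instead of the insteadOf rewrite
go env -w GOPRIVATE=gitlab.domain.com/*
cat > ~/.netrc <<EOF
machine gitlab.domain.com
login nood_deploy
password ${GITLAB_TOKEN}
EOF
chmod 600 ~/.netrc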
For self-hosted instances of GitLab, there is also the additional option of using the go proxy, which is what I do to resolve this problem.
For additional context, see this answer to What's the proper way to "go get" a private repository?

Not getting a build failure status even when the build does not run successfully (cloud-build remote builder)

Cloud Build is not showing a build failure status.
I created my own remote-builder, which scps all files from /workspace to my instance and runs the build using gcloud compute ssh -- COMMAND.
remote-builder
#!/bin/bash
USERNAME=${USERNAME:-admin}
REMOTE_WORKSPACE=${REMOTE_WORKSPACE:-/home/${USERNAME}/workspace/}
GCLOUD=${GCLOUD:-gcloud}
KEYNAME=builder-key
ssh-keygen -t rsa -N "" -f ${KEYNAME} -C ${USERNAME} || true
chmod 400 ${KEYNAME}*
cat > ssh-keys <<EOF
${USERNAME}:$(cat ${KEYNAME}.pub)
EOF
${GCLOUD} compute scp --compress --recurse \
    $(pwd)/ ${USERNAME}@${INSTANCE_NAME}:${REMOTE_WORKSPACE} \
    --ssh-key-file=${KEYNAME}
${GCLOUD} compute ssh --ssh-key-file=${KEYNAME} \
    ${USERNAME}@${INSTANCE_NAME} -- ${COMMAND}
Below is an example of the code that runs the build (cloudbuild.yaml):
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile ."
During docker build, a step inside the Dockerfile failed and showed errors in the log, but the status shows SUCCESS.
Can anyone help me resolve this?
Thanks in advance.
Try adding
|| exit 1
at the end of your docker command... Alternatively, you might just need to change the entrypoint to 'bash' and run the script manually.
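Applied to the cloudbuild.yaml from the question, the first suggestion would look like this sketch:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile . || exit 1"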
To confirm -- the first part was the run-on.sh script, and the second part was your cloudbuild.yaml, right? I assume you trigger the build manually via the UI and/or REST API?
I wrote all the docker commands in a bash script and added the error-handling code below to it.
handle_error() {
    echo "FAILED: line $1, exit code $2"
    exit 1
}
trap 'handle_error $LINENO $?' ERR
It works!
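For context, a sketch of how that trap fits into such a build script (the image name is a placeholder, as in the question):
#!/bin/bash
handle_error() {
    echo "FAILED: line $1, exit code $2"
    exit 1
}
trap 'handle_error $LINENO $?' ERR
# any failing command now aborts with a non-zero exit code, which
# gcloud compute ssh should propagate back to the remote-builder step
docker build -t [image_name]:[tagname] -f Dockerfile .
docker push [image_name]:[tagname]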

Jenkins docker swarm volume gets permission denied

I've got Jenkins running in a docker container, with swarm agents.
I've created a docker volume, 'maven-repository', and can get a job to access it (to cache Maven artifacts). However, if two jobs run simultaneously, I get a permission denied error; sample logs are below.
Has anyone else had this issue and managed to solve it? I've also tried mounting /var/jenkins_home/.m2:/tmp/.m2.
Successful run:
Running on Jenkins in /var/jenkins_home/workspace/Alex Test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Agent Setup)
[Pipeline] sh
[Alex Test] Running shell script
+ docker pull maven:3.5.3-jdk-8-alpine
3.5.3-jdk-8-alpine: Pulling from library/maven
Digest: sha256:d8c6a5fef17ae7fcb2629e558554a085e90c722796306f31bee7fb7b9a5a123e
Status: Image is up to date for maven:3.5.3-jdk-8-alpine
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
[Alex Test] Running shell script
+ docker inspect -f . maven:3.5.3-jdk-8-alpine
.
[Pipeline] withDockerContainer
Jenkins seems to be running inside container 73c7522e3a5de318e5500a6092974cc78ab5eedf4de70b18d264aeb40e40b360
$ docker run -t -d -u 1000:1000 --privileged -v maven-repository:/tmp/.m2 -v /var/run/docker.sock:/var/run/docker.sock -w "/var/jenkins_home/workspace/Alex Test" --volumes-from 73c7522e3a5de318e5500a6092974cc78ab5eedf4de70b18d264aeb40e40b360 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** maven:3.5.3-jdk-8-alpine cat
$ docker top 9902bb5b16a8b5a0b5ee16f2a0c87ea9961fe06c895b859f96142adf388be1b9 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Check Maven Directory)
[Pipeline] sh
[Alex Test] Running shell script
+ ls /tmp
[Pipeline] sh
[Alex Test] Running shell script
+ ls /tmp/.m2
hsperfdata_root
repository
[Pipeline] sh
[Alex Test] Running shell script
+ cat /dev/urandom
+ fold -w 32
+ head -n 1
+ tr -dc a-zA-Z0-9
+ touch /tmp/.m2/68rS0lsIiml0cIg2bJ8BooxltUqJy31X
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 9902bb5b16a8b5a0b5ee16f2a0c87ea9961fe06c895b859f96142adf388be1b9
$ docker rm -f 9902bb5b16a8b5a0b5ee16f2a0c87ea9961fe06c895b859f96142adf388be1b9
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
If a Job is run at the same time:
[Pipeline] withDockerContainer
swarm-agent seems to be running inside container 9475a72289efb2b4a0918fa5127514e81bea82b90720dd859380fe9ffc1f4d92
$ docker run -t -d -u 10000:10000 --privileged -v maven-repository:/tmp/.m2 -v /var/run/docker.sock:/var/run/docker.sock -w "/home/jenkins/agent/workspace/Alex Test@3" --volumes-from 9475a72289efb2b4a0918fa5127514e81bea82b90720dd859380fe9ffc1f4d92 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** maven:3.5.3-jdk-8-alpine cat
$ docker top b7b9793e52ae9497209b7469c936e98d157d408ad5235776e26b3c05c2a151da -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Check Maven Directory)
[Pipeline] sh
[Alex Test@3] Running shell script
+ ls /tmp
[Pipeline] sh
[Alex Test@3] Running shell script
+ ls /tmp/.m2
[Pipeline] sh
[Alex Test@3] Running shell script
+ fold -w 32
+ cat /dev/urandom
+ tr -dc a-zA-Z0-9
+ head -n 1
+ touch /tmp/.m2/8lqaDHloPUqmw6a4Wr3aky7AZdDhMIrW
touch: /tmp/.m2/8lqaDHloPUqmw6a4Wr3aky7AZdDhMIrW: Permission denied
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 b7b9793e52ae9497209b7469c936e98d157d408ad5235776e26b3c05c2a151da
$ docker rm -f b7b9793e52ae9497209b7469c936e98d157d408ad5235776e26b3c05c2a151da
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
We found that the master was running our first job, which had access to the docker volume; however, running multiple jobs caused the slave to be used, and the docker volume was not present there.
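If the agents are managed by hand, a quick check/fix is to make sure the named volume exists on every host that can run the job (a sketch, assuming shell access to each node):
# run on each swarm node that may schedule the build
docker volume inspect maven-repository || docker volume create maven-repository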

Using docker image of maven with jenkins - could not create local repository

I am fairly new to Jenkins and Docker, and I am hitting a problem trying to get a docker image of Maven to build my project.
My Jenkins script is fairly straightforward at the moment:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean install'
            }
        }
    }
}
This produces the error "Could not create local repository at /var/empty/.m2/repository". If I specify a local repo on the command line, I get the same error for whatever path I specify.
Jenkins is installed and running on a Tomcat server, not in Docker, which Jenkins mentions each time it runs, saying "Jenkins does not seem to be running inside a container".
The Jenkins console output looks like this:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 123:128 -v /root/.m2:/root/.m2 -w /usr/share/tomcat7/.jenkins/workspace/MyProject -v /usr/share/tomcat7/.jenkins/workspace/MyProject:/usr/share/tomcat7/.jenkins/workspace/MyProject:rw,z -v /usr/share/tomcat7/.jenkins/workspace/MyProject@tmp:/usr/share/tomcat7/.jenkins/workspace/MyProject@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** maven:3-alpine cat
$ docker top 70c607bda0e397ed8ff79a183f33b2953ed94d82e4f36eb26dea87fa8c67adf3 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] sh
[OMRS] Running shell script
+ mvn -B -DskipTests clean install
[ERROR] Could not create local repository at /var/empty/.m2/repository -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/LocalRepositoryNotAccessibleException
Any help would be greatly appreciated. Thanks.
Instead of mounting the /root/.m2 folder into the container, you probably just need to run as the root user:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-u root'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean install'
            }
        }
    }
}
To use current Maven images in a Jenkins pipeline as a non-root user, try the following docker volume mapping and Maven environment configuration:
args '-v $HOME/.m2:/var/maven/.m2:z -e MAVEN_CONFIG=/var/maven/.m2 -e MAVEN_OPTS="-Duser.home=/var/maven"'
https://github.com/carlossg/docker-maven#running-as-non-root
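Dropped into the Jenkinsfile from the question, that configuration looks like the following sketch (only the args line changes):
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v $HOME/.m2:/var/maven/.m2:z -e MAVEN_CONFIG=/var/maven/.m2 -e MAVEN_OPTS="-Duser.home=/var/maven"'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean install'
            }
        }
    }
}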
