Error in Maven goal inside Jenkins on Docker in Windows - maven

I am trying to set up Jenkins on Docker on my Windows machine. Everything was going smoothly until I configured the Maven goal in Jenkins. It looks like Maven is overlooking the jenkins_home path configured when starting the Docker container. I used the following command during startup:
docker run -p 8080:8080 -p 50000:50000 -v //D/jenkins:/var/jenkins_home jenkins
I also tried the following
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home://D/jenkins_workspace jenkins
but I keep getting the error
[crazywebapp_dev] $ mvn clean install
FATAL: command execution failed
java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
Caused: java.io.IOException: Cannot run program "mvn" (in directory "/var/jenkins_home/workspace/crazywebapp_dev"): error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at hudson.Proc$LocalProc.<init>(Proc.java:249)
I believe it has something to do with Maven, because the Jenkins workspace is getting created on my D: drive, the code is checked out successfully from Bitbucket, and the workspace contents show up in Jenkins. I have also noticed that even though the workspace is created on my D: drive, jenkins_home still shows up as /var/jenkins_home on the Jenkins config page. Please help me figure this out.

I have also noticed that even though the workspace is created on my D: drive, jenkins_home still shows up as /var/jenkins_home on the Jenkins config page. Please help me figure this out.
From the container's perspective, there is no D: drive; jenkins_home will always be /var/jenkins_home inside the container.
The syntax -v //D/jenkins:/var/jenkins_home means mount D:\jenkins onto /var/jenkins_home inside the container. This effectively backs the container's Jenkins home with the D:\jenkins folder on the host.
The syntax -v jenkins_home://D/jenkins_workspace is not what you want. It means create a /D/jenkins_workspace directory inside the container and back it with a named volume called jenkins_home, which does not help here.
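For comparison, a sketch of the usual named-volume form, if the goal is only persistence rather than browsing the files from Windows (the volume name jenkins_home is arbitrary):
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins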
The main problem you have is that Maven is not installed inside the Jenkins container, so Jenkins cannot find it. You need to configure Maven to be installed. You can do that in Jenkins by going to:
Manage Jenkins > Configure System > Maven section, and then configure it to install Maven automatically.
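Alternatively, if you would rather bake Maven into the image itself, here is a minimal Dockerfile sketch (the base image tag and the apt-based install are assumptions about the official Debian-based image, not something from the question):
FROM jenkins/jenkins:lts
# switch to root to install system packages, then drop back to the jenkins user
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends maven \
 && rm -rf /var/lib/apt/lists/*
USER jenkins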

Related

Mount a volume in a host directory

I am running an application from a Dockerfile that I made.
I first run my image with this command:
docker run -it -p 8501:8501 99aa9d3b7cc1
Everything works fine, but I was expecting to see a file in a specific folder of the app's directory, which is the expected behaviour. When running with Docker, it seems the application cannot write to my host directory.
Then I tried to mount a volume with this command
docker 99aa9d3b7cc1:/output .
I got this error docker: invalid reference format.
Which is the right way to persist the data that the application generates?
Use docker bind mounts.
e.g.
-v "$(pwd)"/volume:/output
Files created in /output inside the container will then be accessible in the volume folder relative to where the docker command was run.
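For example, a minimal sketch of the full command, reusing the image ID and port mapping from the question:
docker run -it -p 8501:8501 -v "$(pwd)"/volume:/output 99aa9d3b7cc1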

Connecting folder in a Docker container with a folder on local machine - Permission Denied

I'm new to docker and am trying to bind mount a folder in my docker container with a folder on my local machine. Using the code below, I was able to create the container with no issue.
docker run -it -v /Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits bash
However, when I tried to create a txt file within the container folder, I got this error:
bash: demo.txt: Permission denied
Seeing that it was an access issue, I ran
sudo chmod 777 ../mount_demo
This allowed me to create the file; however, when I checked the folder on my local machine, it was not there. So the folders are not syncing.
I've also made sure the Docker "Shared Drives" setting has the correct credentials. I'm not familiar enough with Docker to know how to troubleshoot further and have not been able to find anything online. I am using Windows, and everything is up to date.
The answer ended up being a really simple fix. Using a Unix-style shell on a Windows machine required adding an additional slash (/) before the folder path. The command below fixed the issue for me:
docker run -it -v //Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits bash
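An equivalent workaround, assuming the command is run from Git Bash (whose MSYS layer rewrites POSIX-looking paths before Docker sees them), is to disable that path conversion for a single command:
MSYS_NO_PATHCONV=1 docker run -it -v /Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits bash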

Running jenkins on docker (on Windows)... Proper steps to run pipeline job

I'm trying to implement docker with jenkins and am not sure if I am on the right track.
Given:
Running Jenkins on Docker from Windows
Plan on fetching code from GitHub, building the solution, running functional tests, etc. in a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run –name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step binds the docker volume to my host machine's directory. This allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt file (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
(4) In the windows command prompt, I built the Dockerfile with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads up Jenkins with the newly installed Jenkins plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic Jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I executed the build, Jenkins did not fetch the code from GitHub.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe it's because the Docker container does not have Git installed? The container cannot access Git or MSBuild from my host machine... Do I have to create a new container here just to fetch the code?
Can someone explain to me what I'm missing or where I went wrong?
From my understanding, the process goes like this: Create new pipeline job -> select pipeline script from scm -> enter repo URL, credentials, branch to build and Jenkinsfile -> Jenkinsfile will execute instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change or update the image, but you don't need to give it a new name each time. Did you stop and remove the previously running container? If not, Docker reports an error that a container with the same name cannot be started. Stop and remove your previous container, and you will be able to start a new container from the updated image.
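For example, a sketch of that cycle, reusing the names from the question:
docker stop myJenkins
docker rm myJenkins
docker build -t new_jenkins_image .
docker run --name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image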
Yes, you need to install Git in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error: Jenkins validates the input even before trying to pull the code. (If you put some fake name in the Script Path, it will throw the next error, e.g. that Git was not found.)
Your thought process is on the right track: create a new pipeline job -> select Pipeline script from SCM -> enter the repo URL, credentials, branch to build, and Jenkinsfile path -> the Jenkinsfile will execute instructions to compile, test, and deploy.
At the end of the question you mention a different Dockerfile; I assume you are talking about a Dockerfile in your Git repository. You can run your pipeline in a Docker agent. This removes the need to set everything up on the Jenkins host, meaning you don't need to install dependencies there just to run your pipeline code. For example, if your pipeline executes some Node.js code, you would normally need to set up Node.js on the Jenkins host first; to avoid that, you can run the pipeline in a container where everything is pre-installed. But I don't think you can use this feature if you are running Jenkins itself in Docker; you would need to set up Jenkins directly on the host in that case.
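For illustration, a minimal Jenkinsfile sketch that runs its build step inside a Maven container via a Docker agent (this assumes the Docker Pipeline plugin is available; the image tag and stage name are my own choices, not from the question):
pipeline {
    // run every stage inside a throwaway Maven container
    agent { docker { image 'maven:3-jdk-8' } }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
}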

NiFi Build Errors

I'm trying to build Apache NiFi after cloning it from https://github.com/apache/nifi, and it keeps failing on the tests in the nifi-standard-processors module. I opened the output file in the surefire-reports directory and found the error below: it can't run the program "cmd" in the directory /var/test because no such file or directory exists. The first time I ran the install the directory didn't exist, but I created it and I still get the error message. I do a sanity check every time to make sure the directory still exists. Does anyone have any idea what might be causing this issue? I'm only taking a very few steps, posted below. I'm logged on as root on a CentOS Linux VM. Thanks in advance for any help.
Steps:
cd /tmp
git clone https://github.com/apache/nifi
cd nifi
mvn clean install
[main] ERROR org.apache.nifi.processors.standard.ExecuteProcess - ExecuteProcess[id=a8d6b3a3-befa-4b74-a962-330bd021ec7b] Failed to create process due to java.io.IOException: Cannot run program "cmd" (in directory "/var/test"): error=2, No such file or directory: java.io.IOException: Cannot run program "cmd" (in directory "/var/test"): error=2, No such file or directory
I believe this is due to a recent commit "solving" this ticket [1]. I have actually already reopened it [2] due to failures on Travis CI, and the contributor is currently working on a fix.
In order to build now, you can tell Maven to skip the tests by running the command with the proper flag: mvn clean install -Dmaven.test.skip=true
[1] https://issues.apache.org/jira/browse/NIFI-2905
[2] https://issues.apache.org/jira/browse/NIFI-2905?focusedCommentId=15603258&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15603258
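For reference, Maven also has a milder flag that still compiles the test sources but skips running them, which is enough when only the test runs are failing:
mvn clean install -DskipTests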

Maven-Wrapper in jHipster inside Docker-Container: FileNotFoundException

I installed Docker inside a VM running Lubuntu 16.04. Afterwards I pulled the jhipster/jhipster container according to this tutorial. Accessing it with docker exec -it jhipster bash works fine, as does creating an app via yo jhipster. But when I want to run the app using the Maven wrapper via ./mvnw, the following error occurs (after just under a second):
Downloading https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.9/apache-maven-3.3.9-bin.zip
Exception in thread "main" java.io.FileNotFoundException: /home/jhipster/.m2/wrapper/dists/apache-maven-3.3.9-bin/2609u9g41na2l7ogackmif6fj2/apache-maven-3.3.9-bin.zip.part (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.maven.wrapper.DefaultDownloader.downloadInternal(DefaultDownloader.java:69)
at org.apache.maven.wrapper.DefaultDownloader.download(DefaultDownloader.java:60)
at org.apache.maven.wrapper.Installer.createDist(Installer.java:64)
at org.apache.maven.wrapper.WrapperExecutor.execute(WrapperExecutor.java:121)
at org.apache.maven.wrapper.MavenWrapperMain.main(MavenWrapperMain.java:50)
There seems to be no Maven installed inside the container, but that is what mvnw is for, right? Anyway, it's not possible to install Maven myself (inside the container) because of missing su rights (sudo isn't found, and su works "only from terminal").
I don't get what's wrong here... Can you help?
PS: The .m2-directory is empty.
I'm assuming you mapped your Maven folder in the VM to the /home/jhipster/.m2 folder in the Docker container, as per the tutorial instructions. I found that if the VM did not already have Maven installed, the ~/.m2 folder in the VM was created with root as its owner; I'm not sure how or why. As a result, the jhipster user in the Docker container didn't have permission to write to the /home/jhipster/.m2 folder. You should be able to fix it by changing the owner of the folder (from within the VM) to the user you are using to run Docker.
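For example, a sketch of that fix run inside the VM, assuming the account you use to run docker is the one you are currently logged in as:
sudo chown -R "$(id -un):$(id -gn)" ~/.m2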
