Jib is not able to detect Docker credentials - Spring

I am building an app generated with JHipster.
I started Docker Desktop on Windows 11 and ran the command to build the image and run the app containerized.
As a reminder, this is the command: ./gradlew -Pprod bootJar jib
After a while the output is:
Execution failed for task ':jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/app2' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
I tried multiple times to log in to Docker:
docker login registry-1.docker.io
The login succeeds, and the content of Docker's config.json is:
{
  "auths": {
    "https://index.docker.io/v1/": {},
    "registry-1.docker.io": {}
  },
  "credsStore": "desktop"
}
I'm sure this is where Jib looks for Docker credentials by default, but I cannot see any credentials here. It looks like they are stored somewhere else. Here is the Docker version: Docker version 20.10.17, build 100c701
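(For reference: with "credsStore": "desktop", the credentials are not kept in config.json at all; Docker delegates them to the docker-credential-desktop helper, which Jib also knows how to call. A quick way to check what the helper actually holds, assuming docker-credential-desktop is on your PATH:

docker-credential-desktop list                                        # maps registry URLs to usernames
echo "https://index.docker.io/v1/" | docker-credential-desktop get    # prints the stored credential for that registry
)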

Maybe try building offline first; this is probably a permission issue on the remote repository, which will need to be fixed on hub.docker.com.
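One way to narrow this down (a sketch, not part of the original answer): the Jib Gradle plugin has a jibDockerBuild task that builds straight to the local Docker daemon, skipping the push to registry-1.docker.io entirely. If this succeeds, the image itself builds fine and only the push credentials are the problem:

./gradlew -Pprod bootJar jibDockerBuild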

Related

Docker manifest unknown: GitLab repository: Docker Desktop

I've been given a Docker image stored in the GitLab container registry, registry.gitlab.com.
I have a GitLab account with a password, and I am able to do a docker login:
docker login registry.gitlab.com
After I do that, I no longer get an authentication error when I try to do a docker command against that registry.
And the documentation for using that registry seems clear:
Go to your project or group’s Packages and registries > Container Registry and find the image you want.
Next to the image name, select Copy.
Use docker run with the image link:
docker run [options] registry.example.com/group/project/image [arguments]
https://docs.gitlab.com/ee/user/packages/container_registry/
But when I run any kind of docker command with the group/project/image I just copied, I just get the "manifest unknown" docker error, which normally indicates that the image is missing or mis-spelled.
So maybe GitLab is broken, or maybe the GitLab documentation is wrong, or maybe there is something wrong with that particular image, or maybe it doesn't work using Docker on WSL through Docker Desktop on Win10, or maybe ... I just haven't set something up correctly.
FWIW, Docker Desktop is a Windows service/application that proxies 'docker' commands on Windows, sending them to a Docker instance running on WSL. It's normally transparent. It maintains a local image store, and seems to have some way of connecting to Docker Hub, but I've never used it with any other registry.
I'd like to pull that image into my local image store. What should I do differently?
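One mechanical detail worth checking (a hedged guess, not an answer from this thread): docker assumes the :latest tag when none is given, and "manifest unknown" is exactly what a registry returns when the repository exists but the requested tag does not. Many images are only ever pushed with explicit tags, so try pulling with a tag copied from the project's Container Registry page:

docker login registry.gitlab.com
docker pull registry.gitlab.com/group/project/image:some-tag   # some-tag is a placeholder; copy a real tag from the registry UI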

Unable to find docker image locally

I was following this post - the reference code is on GitHub. I have cloned the repository locally.
The project has a React app inside it. I'm trying to run it locally, following step 7 of the same post:
docker run -p 8080:80 shakyshane/cra-docker
This returns:
Unable to find image 'shakyshane/cra-docker:latest' locally
docker: Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'.
See 'docker run --help'.
I tried logging in to Docker again, but it looks like since it belongs to shakyshane I cannot access it.
I idiotically tried npm start too, but it's not a simple React app running on Node - it's in the container, and containers are not controlled by npm.
It looks like docker pull shakyshane/cra-docker:latest throws this:
Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'
So the question is: how do I run this Docker image on my local Mac machine?
Well, this is illogical, but I'm still sharing it so future people like me don't get stuck.
The problem was that I was trying to run a Docker image which didn't exist locally.
I needed to build the image first:
docker build . -t xameeramir/cra-docker
And then run it:
docker run -p 8080:80 xameeramir/cra-docker
In my case, my image had a TAG specified and I was not using it:
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
testimage    testtag   189b7354c60a   13 hours ago   88.3MB
I got "Unable to find image 'testimage:latest' locally" for the command docker run testimage.
So specifying the tag like this - docker run testimage:testtag - worked for me.
Posting my solution since none of the above worked for me.
I'm working on a MacBook M1 Pro.
The issue I had is that the image was built as arm64, and I was running the command:
docker run --platform=linux/amd64 ...
So I had to build the image for the amd64 platform in order to run it.
Command below:
docker buildx build --platform=linux/amd64 ...
In conclusion, from what I experienced, your Docker image's platform and the docker run platform need to be the same.
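A complete pair of commands might look like this (a sketch; myapp is a placeholder image name, and --load tells buildx to load the result into the local image store rather than leaving it in the build cache):

docker buildx build --platform=linux/amd64 --load -t myapp:latest .
docker run --platform=linux/amd64 myapp:latest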
In my case, the Docker image did exist on the system and I still couldn't run the container locally, so I used the exact image ID instead of the image name and tag, like this:
docker run --name myContainer c29150c8588e
I received this error message when I typed the name wrong: that is, "name1\name2" instead of "name1/name2" (wrong slash).
In my case, I saw this error when I was logged in to Docker Hub in Docker Desktop. The repo I was pulling was local to my enterprise. Once I logged out of Docker Hub, the pull worked.
This just happened to me because my local Docker VM on macOS ran out of disk space.
I deleted some old images using docker image prune and it started working correctly again.
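If you suspect the same cause, Docker can report its own disk usage (standard Docker CLI, not something from this thread):

docker system df     # shows space used by images, containers, and volumes
docker image prune   # removes dangling images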
shakyshane/cra-docker does not exist in that user's repositories: https://hub.docker.com/u/shakyshane/
The problem is that you are trying to run an image that does not exist. If you are executing a Dockerfile, the image is not created until the Dockerfile builds with no errors; so when you try to run the image, it can't be found. Be sure you have no errors in the execution of your scripts.
The simplest answer can be the correct one! Make sure you have permission to execute the command; use:
sudo docker run -p 8080:80 shakyshane/cra-docker
In my case, I didn't realise there was a difference between docker run and docker start, and I kept using the run command when I should have been using the start command.
FYI: run creates a new container from an image and starts it; start just starts an existing, stopped container.
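For illustration (nginx here is just a convenient public image):

docker run --name web -p 8080:80 nginx   # creates a new container from the image and starts it
docker stop web                          # the container still exists, just stopped
docker start web                         # starts the same container again; no new container is created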
Use -d:
sudo docker run -d -p 8000:8000 rasa/duckling
You can learn about -d with:
sudo docker run --help
At first, I built the image on a MacBook M1 Pro with this command:
docker build -t hello_k8s_world:0.0.1 .
When I ran this image, the issue appeared. After reading Master Yi's answer, I realized the crux of the matter and rebuilt my image like this:
docker build --platform=arm64 -t hello_k8s_world:0.0.1 .
Finally, it worked.

Running Jenkins on Docker (on Windows)... Proper steps to run a pipeline job

I'm trying to implement Docker with Jenkins and am not sure if I am on the right track.
Given:
Running Jenkins on Docker from Windows
Plan on fetching code from GitHub, building the solution, running functional tests, etc. on a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run --name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step binds the Docker volume to my host machine's directory. This allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
(4) In the Windows command prompt, I built the Dockerfile with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads up Jenkins with the newly installed Jenkins plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new Jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic Jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I execute the build, Jenkins does not fetch the code from GitHub.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe it's because the Docker container does not have Git installed? The container cannot access Git or MSBuild from my host machine... Do I have to create a new container here simply to fetch the code?
Can someone explain to me what I'm missing or where I went wrong?
From my understanding, the process goes like this: Create new pipeline job -> select pipeline script from scm -> enter repo URL, credentials, branch to build and Jenkinsfile -> Jenkinsfile will execute instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change/update the image, but it is not required to give it a new name each time. Did you stop and remove the previously running container? If not, Docker gives errors like "a container with the same name cannot start". So stop and remove your previous container, and you will be able to start a new container with the updated image under the old name.
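Concretely, reusing the container and image names from the question:

docker stop myJenkins2
docker rm myJenkins2
docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image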
Yes, you need Git installed in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error. (I mean Jenkins validates the input even before trying to pull the code; if you enter some fake Script Path instead, it will throw the next error, like Git not found.)
Your thinking is on the correct track: create a new pipeline job -> select Pipeline script from SCM -> enter the repo URL, credentials, branch to build and Jenkinsfile path -> the Jenkinsfile will execute instructions to compile, test, and deploy.
At the end of the question you mentioned a different Dockerfile; I assume you are talking about a Dockerfile in your repository (Git). You can run your pipeline in a Docker agent. This removes the need to set up everything on the Jenkins host: you don't need to install dependencies on the host to run your pipeline code. For example, if you are trying to execute some Node.js code in the pipeline, you would normally need to set up Node.js on the Jenkins host first; to get rid of this, you can run the pipeline in a container where everything is pre-installed (see the sketch below). But I don't think you can use this feature if you are running Jenkins itself in Docker; you would need to set up Jenkins directly on the host in that case.
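A minimal sketch of such a Jenkinsfile (the node image and build step are illustrative assumptions, not from the answer; this requires the Docker Pipeline plugin):

pipeline {
    agent {
        docker { image 'node:18-alpine' }   // hypothetical build image with Node.js pre-installed
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version'   // placeholder build step; runs inside the container
            }
        }
    }
}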

Unable to push image to GCR from Jenkins Pipeline

I am running a VM in Google Cloud that runs a Jenkins server (within a Docker container). I am trying to build a Docker image for my application and push it to Google Container Registry using a Jenkins pipeline.
I installed all the required Jenkins plugins:
Google OAuth Credentials Plugin,
Docker Pipeline Plugin,
Google Container Registry Auth Plugin
Created a service account + key with the Storage Admin and Object Viewer roles. Downloaded the JSON file.
Created a credential in Jenkins using the Google project name as the ID and the JSON key.
My pipeline code for the build looks like this:
stage('Build Image') {
    app = docker.build("<gcp-project-id>/<myproject>")
}
My pipeline code for the push looks like this:
stage('Push Image') {
    docker.withRegistry('https://us.gcr.io', 'gcr:<gcp-project-id>') {
        app.push("${commit_id}")
        app.push("latest")
    }
}
However, the build fails at the last step with this error:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I have spent several hours trying to figure this out. Any help would be greatly appreciated!
Create a service account in GCP with permission to push images, then copy the credential JSON file and save it as a credential inside Jenkins; reference the credential ID inside your pipeline like below, and it should push images to GCR:
withCredentials([file(credentialsId: 'gcr', variable: 'GC_KEY')]) {
    sh "cat '$GC_KEY' | docker login -u _json_key --password-stdin https://eu.gcr.io"
    sh "gcloud auth activate-service-account --key-file='$GC_KEY'"
    sh "gcloud auth configure-docker"
    GCLOUD_AUTH = sh(
        script: 'gcloud auth print-access-token',
        returnStdout: true
    ).trim()
    echo "Pushing image to GCR"
    sh "docker push eu.gcr.io/${google_projectname}/${image_name}:${image_tag}"
}
Additionally, I have defined some variables used above (google_projectname, image_name, image_tag).
I have an identical problem. I found out that Jenkins doesn't seem to use those credentials: under Usage it says "This credential has not been recorded as used anywhere." When used with the gcloud utility, the service account and key work fine, so the problem is somewhere in Jenkins.

How can I properly configure a gcloud account for my Gradle Docker plugin when using GCR?

Our containers are hosted on Google Container Registry, and I am using id "com.bmuschko.docker-java-application" version "3.0.7" to build and deploy Docker containers. However, I run into permission issues whenever I try to pull the base image or push the image to GCR (I am able to get to the latter step by pulling the image and having it available locally).
I'm a little confused about how to properly configure a particular gcloud account to be used whenever issuing any Docker-related calls over the wire using the plugin.
As a first attempt, I've tried to create a task that precedes any build or push commands:
task gcloudLogin(type: Exec) {
    executable "gcloud"
    args "auth", "activate-service-account", "--key-file", "$System.env.KEY_FILE"
}
However, this simple wrapper doesn't work as desired. Is there currently a supported way to have this plugin work with GCR?
I got in touch with the maintainers of the Gradle Docker plugin, and we found this to be a valid solution.
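For reference, a sketch of wiring GCR credentials into the bmuschko plugin directly, assuming your plugin version supports the registryCredentials block (_json_key is the standard username for key-file logins to GCR, and KEY_FILE is the same environment variable used in the question):

docker {
    registryCredentials {
        url = 'https://gcr.io'
        username = '_json_key'
        password = file(System.getenv('KEY_FILE')).text   // contents of the service-account key file
    }
}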