I am writing a Jenkins pipeline that pulls a Docker image from a private registry. At the moment I have an issue with the docker pull command, which fails when launched from Jenkins.
Currently, the Jenkins pipeline works as follows:
takes a set of parameters from the user (i.e. image name and tag)
launches a separate bash script that performs the pull operation.
The bash script does two operations:
logs in to the registry with: echo $pwd | docker login -u$username --password-stdin $registryUrl
pulls the image with: docker pull $image:$tag
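Put together, the script looks roughly like this (a sketch reconstructed from the description above; the variable names are the ones used in the question and are assumed to be provided by the pipeline):
#!/bin/bash
set -e
# Log in to the private registry, reading the password from stdin
echo "$pwd" | docker login -u "$username" --password-stdin "$registryUrl"
# Pull the requested image and tag
docker pull "$image:$tag"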
The login command succeeds, while the pull command replies with:
Error response from daemon: access denied:
no access to Image Load, on collection swarm
At first sight I thought it was a matter of user privileges, but when I run the bash script outside the Jenkins context (as the owner of the Jenkins process), it works.
Am I missing some configuration? As an alternative, I tried implementing the Jenkins pipeline using the Docker Jenkins plugin API, but it fails too.
Final Note
The owner of the Jenkins process is different from the user logging in to the Docker private registry. Could this affect the behavior?
Operating System:
macOS 10.14.6
Docker Version details:
Client: Docker Engine - Community
Version: 19.03.4
API version: 1.40
Go version: go1.12.10
Git commit: 9013bf5
Built: Thu Oct 17 23:44:48 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.4
API version: 1.40 (minimum version 1.12)
Go version: go1.12.10
Git commit: 9013bf5
Built: Thu Oct 17 23:50:38 2019
OS/Arch: linux/amd64
Pipelines have their own commands and tricks to log in to Docker Hub. For example:
docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
    docker.image('myteam/secretimage:latest').inside() {
        checkout scm
        sh './configure && make && make test'
    }
}
Here, dockerhub-creds is the ID of credentials you should create in Jenkins itself.
This technique is better than echoing the password because:
The password is hidden in the build's stdout and from other users who don't have access to the credentials.
The password is magically mirrored across users, build nodes, Docker containers, etc., so you will not face the situation where "the execution context is different and nothing works".
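For a private registry and a plain pull, which is closer to the use case in the question, a minimal sketch could look like the following; the registry URL, the 'private-registry-creds' credentials ID, and the parameter names are assumptions for illustration, not values from your setup:
docker.withRegistry("https://${params.registryUrl}", 'private-registry-creds') {
    // Pulls the image using the credentials injected by withRegistry
    docker.image("${params.image}:${params.tag}").pull()
}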
I am facing a build error on CircleCI.
This is my repo.
Error
Build-agent version 1.0.41563-0e4d6629 (2020-10-22T11:30:36+0000)
Docker Engine Version: 18.09.6
Kernel Version: Linux 2e2f534dcc94 4.15.0-1077-aws #81-Ubuntu SMP Wed Jun 24 16:48:15 UTC 2020 x86_64 Linux
Starting container circleci/node:6.14.8
Warning: No authentication provided, this pull may be subject to Docker Hub download rate limits.
image cache not found on this host, downloading circleci/node:6.14.8
Error response from daemon: manifest for circleci/node:6.14.8 not found
6.14.8 is a really historic version. Are you sure you didn't mean 14.6.8, 14.8.6, etc.?
To see which versions CircleCI supports, see the pre-built CircleCI Docker images.
You can also check which official Node versions you can pull at https://hub.docker.com/_/node
Off-topic: you're not authenticating your pulls. Docker applies pull rate limits as of Nov 1st, so it would be a good idea to authenticate before the pull; see Docker auth on CircleCI for a tip.
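For the executor image itself, authentication goes into the docker section of .circleci/config.yml. A minimal sketch, assuming you store your Docker Hub credentials as DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD in the project's environment variables (the image tag below is only an example, pick one that actually exists):
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:14.8.0
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
    steps:
      - checkout
      - run: node --version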
I have docker installed on Windows Server 2019 Datacenter.
This is the Docker info:
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:37 2019
OS/Arch: windows/amd64
Experimental: false
I would like to have Docker start automatically whenever the server starts, but I consistently get this message at startup:
Service is not running
Docker Desktop service is not running, would you like to start it? Windows will ask you for elevated access.
In order to start Docker, I have to press Start manually through the GUI, but I would like to automate this process.
I have already tried:
- Logging in with my account on this machine
- Putting a Docker shortcut in the shell:startup folder
Thanks.
I found that using
"net start com.docker.service" before starting the docker.exe process works.
Here's the original command after an initial search for solutions on the web. I tried various combinations of slashes (/) in vain:
winpty docker run --privileged --rm -it -v '//c//temp//git//distributions//cache://root//.gradle' --mount type=bind,source=//c//temp//git//distributions,target=//dist destiny_server.in.systems.com/x86/p83-buildenv './gradlew'
C:/Program Files/Docker Toolbox/docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"./gradlew\": stat ./gradlew: no such file or directory": unknown.
After further searching on the web, I used bind mounts and propagation options in various combinations, but ended up with this error:
winpty docker run --privileged --rm -it --mount type=bind,source=//c//temp//git//distributions//cache,target=//root//.gradle --mount type=bind,source=//c//temp//git//distributions,target=/dist,bind-propagation=shared destiny_server.in.systems.com/x86/p83-buildenv './gradlew'
C:/Program Files/Docker Toolbox/docker.exe: Error response from daemon: linux mounts: path /c/temp/git/distributions is mounted on / but it is not a shared mount.
Another option I came across is MountFlags. However, the Docker documentation only mentions it and doesn't say where to specify it, and I couldn't find a way to specify it on the command line. I found references on the web to files such as Dockerfile, docker.service, etc., but I am unable to find any of these files on my system. Is it necessary to create a swarm to get these files? The last option I have is MountFlags, but I don't know how and where to specify it.
Docker details: I have just upgraded the Docker client from 17.07.0-ce to 18.03.0-ce on my desktop as I type this. I still see the same issue.
From Docker Quickstart Terminal:
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24302
Built: Fri Mar 23 08:31:36 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server: Docker Engine - Community
Engine:
Version: 18.09.3
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 774a1f4
Built: Thu Feb 28 06:40:51 2019
OS/Arch: linux/amd64
Experimental: false
Any help in this regard is much appreciated. Thanks.
Using Docker on Windows is fraught with problems relating to paths. It looks like you're falling down one of these rabbit holes. Assuming .gradle is a directory (it is on my machine):
The way I have solved this is with Docker volumes. Volumes are different from the drive mapping you're currently using, as the files will not appear on the host filesystem, but they are persistent across container executions, which is often sufficient. For example, I have a /git directory mounted from a volume, and all my dev tools use it to get hold of source code that has been cloned from git. If you need to access these files outside Docker, then a volume can be awkward, but if you only use Gradle inside a container and never on the host, then you could map your .gradle directory to a volume and simply reuse it across different containers.
Before you do this though, it is a good idea to explicitly create a volume with:
docker volume create gradle
There are various techniques that will implicitly create volumes, but personally, I like to be explicit because of the principle of least surprise.
The syntax to mount the volume is similar to the one you used above, but we simply name the volume on the left-hand side of the volume switch:
docker container run -it --rm -v gradle:/root/.gradle/ ...
There may be another problem in your docker run command though. At the end, you pass the string './gradlew'. You're assuming that this is in the working directory, which could be a problem if the working directory isn't what you think it is. To ensure this doesn't happen, you should state the working directory using the -w <dir> switch. Something like:
docker container run -it --rm -w /root -v gradle:/root/.gradle/ ...
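Putting that together for the command in the question, a sketch might look like the following (the image name and the //c//temp//git//distributions bind mount are taken from your command; using //dist as the working directory is an assumption about where gradlew lives inside the container, so adjust as needed):
# Create the named volume once
docker volume create gradle
# Run the build with the Gradle cache in the volume and the sources bind-mounted
winpty docker run --rm -it \
    -v gradle://root//.gradle \
    --mount type=bind,source=//c//temp//git//distributions,target=//dist \
    -w //dist \
    destiny_server.in.systems.com/x86/p83-buildenv './gradlew'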
While setting up a Windows CI pipeline with GitLab, I worked through the numerous issues related to the Windows gitlab-runner Docker executor, which uses an old API version (1.18) that Docker no longer accepts.
The issue results in the following error messages when GitLab CI tries to use the runner:
Running with gitlab-runner 11.2.0 (35e8515d) on Windows VS2017 x64 0825d1d7
Using Docker executor with image buildtools2017 ...
ERROR: Preparation failed: Error response from daemon: client version 1.18 is too old. Minimum supported API version is 1.24, please upgrade your client to a newer version (executor_docker.go:1148:0s)
The 'buildtools2017' Docker image referred to is the official Microsoft one.
The image seems to be working and valid for the current (experimental) Docker version I'm using (18.06.1-ce-win74) and for the stable version as well.
The issue was described throughout the GitLab wiki. Andrew Leech (?) went so far as to fork and modify the runner so that it would connect properly, and kindly provided his scripts and comments in a blogpost. This seems to give some results:
C:\gitlab-runner>gitlab-runner.exe -v
Version: 10.8.0~beta.551.g67a6ccc7
Git revision: 67a6ccc7
Git branch: windows-container-executor
GO version: go1.9.4
Built: 2018-07-30T08:57:44+00:00
OS/Arch: windows/amd64
The GitLab wiki states that they're waiting until a more stable solution can be released. Currently it's been over a year of broken Windows Docker runners.
Andrew's blog post, and the link to his gitlab-runner.exe, actually describe a different workaround that uses the PowerShell runner to start a Docker instance. All the token info is exposed, I'm not sure how to set it up, and it also seems to rely on an external image with older build tools.
It seems the Docker runner now connects, but if I understand correctly, the gitlab-runner Docker executor does not agree on the 'build directory' that is used. The first GitLab CI script line in my repo is just an echo command, so the error is not about the CI script content, but I'm not sure what it IS about. If anyone with Docker fu knows what is going on, that would really help me.
Using Docker executor with image buildtools2017 ...
ERROR: Preparation failed: build directory needs to be absolute and non-root path
Cheers,
I have two runners: the one on Linux works fine, but the one on Windows has problems that I am trying to solve. In order to know the current state of the runner, I use two commands: verify and status. When I run verify:
>gitlab-runner.exe verify
outputs:
Verifying runner... is alive  runner=c6xxxxxx
while status
>gitlab-runner.exe status
outputs:
gitlab-runner: Service is not running.
Question
What is the difference between being alive and running?
PS
This question is not about why it is not running; it is about understanding the status.
gitlab-runner version
Version: 10.6.0
Git revision: a3543a27
Git branch: 10-6-stable
GO version: go1.9.4
Built: 2018-03-22T08:34:34+00:00
OS/Arch: windows/amd64
gitlab-runner.exe status tells you whether the service is running. It seems that you've stopped it with gitlab-runner.exe stop.
gitlab-runner.exe verify tells you whether your runner is still registered to a GitLab instance. If you or someone else removed it in the GitLab CI/CD settings, your runner wouldn't be alive anymore.
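So the two states are independent: a runner can still be registered with GitLab (alive) while its Windows service is stopped. A quick way to check and fix the service side, assuming gitlab-runner.exe was installed as a Windows service and you run this from an elevated PowerShell prompt in the runner's directory:
.\gitlab-runner.exe status    # reports whether the Windows service is running
.\gitlab-runner.exe start     # starts the installed service
.\gitlab-runner.exe verify    # reports whether the runner is still registered with GitLab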