I am trying to run deck commands inside a Docker container in a Jenkins pipeline, but I am seeing permission issues when deck tries to access a file in the git repo. So I tried to mount the file from the git repo into the Docker container so that the permission issues would be eliminated. But when I try to mount it I am seeing the following error; can someone help me with this?
The command to mount the file into the Docker container in the Jenkins pipeline is as follows:
sh "docker run -v $PWD/kong.yaml:/root/app/kong.yaml api-deck:latest"
The Dockerfile is as follows:
FROM hbagdi/deck
WORKDIR /kong-configs
COPY . .
RUN deck version
CMD echo "Hello world"
ENTRYPOINT ["deck"]
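One thing worth checking (an assumption on my part, since the actual error isn't shown): the Dockerfile sets WORKDIR /kong-configs, but the volume is mounted at /root/app/kong.yaml. If deck reads its state file from the working directory, mounting into the WORKDIR may help; escaping $PWD also keeps Groovy from interpolating it before the shell does. A minimal sketch:
# hypothetical variant: mount into the image's WORKDIR and let the shell expand $PWD
sh "docker run -v \$PWD/kong.yaml:/kong-configs/kong.yaml api-deck:latest sync"
Here sync is only an example deck subcommand; substitute whichever command the pipeline actually needs to run.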
Related
I am trying to copy a folder out of a container using docker cp, but I am running into an unexpected issue: the command works perfectly outside of a shell script yet fails when run from the script.
For example: copy_indices.sh
for x in "${find_container_id_arr[@]}"; do
    CONTAINER_NAME="${x}"
    CONTAINER_ID=$(docker ps -aqf "name=${x}")
    idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices)
    docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
done
I determine the container ID using CONTAINER_ID=$(docker ps -aqf "name=${x}"), find the name of the folder I need using idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices), and then copy it to the host filesystem: docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
My issue is that every command evaluates and runs as expected when not put inside this script. I can run the command docker cp <my_container>:/usr/share/elasticsearch/data/nodes/0/indices/<index_name> ./all_indices/<index_name> and the target folder is indeed found and copied onto the host.
Once these commands are inside the script, however, I get an "Error: No such container:path:" error and I can't pinpoint what is going wrong, because the mentioned path does exist in the container and the container is correct; I verified this by manually running the "final" docker cp command the script is supposed to execute.
What could be the reason these commands suddenly stop working when put in a shell script?
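One common cause worth ruling out (an assumption, not confirmed in this thread): docker exec -t allocates a pseudo-TTY, and TTY output uses CRLF line endings, so the captured $idx_name ends with an invisible carriage return and the docker cp path no longer matches anything. A sketch of the loop body with -it dropped and any stray \r stripped:
# no TTY needed when capturing output in a script; strip \r defensively
idx_name=$(docker exec "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices | tr -d '\r')
docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
Interactively the shell never captures that output into a variable, which would explain why the same command works outside the script.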
I have a bash script that performs some Docker commands:
#!/usr/bin/env bash
echo "Create and start database"
cd ../../database
cp -R ../../../scripts/db db/
docker build -t a_database:1 .
docker run --rm --name a_db -e POSTGRES_PASSWORD=docker -d -p 5432:5432 a_database:1
docker network connect --ip 172.23.0.5 a_network a_db
sleep 15
echo "Initialize database"
docker exec a_db /root/db/dev/init_db.sh
echo "Cleanup"
rm -rf db
On Mac everything works fine; the problem occurs when I try to run this script on a Windows machine. When I run it I receive an error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"C:/Program Files/Git/root/db/dev/init_db.sh\": stat C:/Program Files/Git/root/db/dev/init_db.sh: no such file or directory": unknown
The directory and script (/root/db/dev/init_db.sh) exist inside the Docker container. I don't know why it tries to find the script on the host machine. Also, when I run the command:
docker exec a_db /root/db/dev/init_db.sh
directly on the command line (on Windows), the script is executed. Any idea what is wrong and why it's trying to use Git?
I had a similar problem... absolute paths with Windows variables fixed mine:
$HOME/docker/...
Thanks to igaul's answer I was able to run this on a Windows machine. There were two problems:
1. The path to the script in the Docker container. Instead of:
docker exec a_db /root/db/dev/init_db.sh
it should be:
docker exec a_db root/db/dev/init_db.sh
2. Line endings in init_db.sh. On the Windows machine, after pulling the repository from Bitbucket, the line endings of init_db.sh were set to CRLF, which caused the problem. I've added a .gitattributes file to my repo and now init_db.sh always has LF endings.
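For reference, a minimal .gitattributes entry that pins shell scripts to LF regardless of core.autocrlf (the exact pattern is up to you) could look like:
# .gitattributes: always check out shell scripts with LF endings
*.sh text eol=lf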
It's not a bug in Docker, but the way mingw handles these paths. Here is some more information about that "feature": http://www.mingw.org/wiki/Posix_path_conversion. Prefixing the path with a double slash (//bin/bash) should prevent this, or you can set MSYS_NO_PATHCONV=1; see "How to stop MinGW and MSYS from mangling path names given at the command line".
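Concretely, either of these variants of the failing command should work from Git Bash (sketches based on the two workarounds above):
# disable MSYS path conversion for this one invocation
MSYS_NO_PATHCONV=1 docker exec a_db /root/db/dev/init_db.sh
# or prefix the in-container path with a double slash so mingw leaves it alone
docker exec a_db //root/db/dev/init_db.sh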
I want a container based on the "centos:latest" image to start and execute my script. The scripts are copied in with docker cp commands.
docker create --name centos1 centos:latest
docker cp . 5db38b908880:/opt        # the scripts are in the current directory, hence "."
docker commit centos1 new_centos1    # now the new_centos1 image has the scripts
Now I want to start new container with the scripts to be executed: I tried below commands:
docker run -ti --rm --entrypoint "cd /opt && deploy_mediainfo_lambda.sh" new_centos1:latest
docker run -ti --rm new_centos1:latest "cd /opt && deploy_mediainfo_lambda.sh"
Both of the above commands failed with:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"cd /opt && deploy_mediainfo_lambda.sh\": stat cd /opt && deploy_mediainfo_lambda.sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled
If I start the container with the bash command, I can run my script inside the container using '<executable path>/<executable name>', but I cannot do this while starting the container from the command line:
docker run -ti --rm new_centos1:latest bash
[root@c34207f3f1c4 /]# ./opt/deploy_mediainfo_lambda.sh
If I use the command below, which calls the executable directly, it gives a path error:
docker run -ti --rm new_centos1:latest "deploy_mediainfo_lambda.sh"
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"deploy_mediainfo_lambda.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
I am also not sure about setting $PATH from the command line while starting the container.
I know this is achievable using a Dockerfile:
- the path can be set using ENV,
- executables can be copied with ADD or COPY,
- executables can be run using CMD or ENTRYPOINT.
How do I achieve it using the docker command line?
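For what it's worth, a hedged sketch of the $PATH part: docker run can override environment variables with -e, and the container's PATH is what is used to resolve a bare executable name. Assuming the scripts sit in /opt:
# prepend /opt to the default PATH so the bare script name resolves
docker run -ti --rm -e PATH=/opt:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin new_centos1:latest deploy_mediainfo_lambda.sh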
Thanks melpomene.
Here is my bash script that automates script execution inside the container, after copying the scripts in, all using docker commands.
# Create the docker container (docker create does not start it)
docker create --name mediainfo_docker centos:latest
# copy the script files into the container
docker cp . mediainfo_docker:/opt
# commit the container to a new image, which contains all the scripts
docker commit mediainfo_docker mediainfo_docker_with_scripts
# now run the script inside a container started from that image
docker run -ti --rm mediainfo_docker_with_scripts:latest /opt/deploy_mediainfo_lambda.sh
Since deploy_mediainfo_lambda.sh is a script, its first line is:
#!/bin/bash
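A small caveat (a general note, not from the thread): running the script by path relies on its execute bit surviving docker cp. If it doesn't, invoking the script through bash sidesteps the permission check:
# works even if the execute bit was lost along the way
docker run -ti --rm mediainfo_docker_with_scripts:latest bash /opt/deploy_mediainfo_lambda.sh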
I have created a docker image which contains the following CMD:
CMD ["sh", "start.sh"]
When I run the docker image I use the following command inside a Makefile
docker run --rm -v ${PWD}:/selenium $(DOCKER_IMAGE)
which mounts the files from the current (host) directory into the container's /selenium folder. The files include the selenium test files as well as the file start.sh. But immediately after the container has started, I get the error
"sh: 0: Can't open start.sh"
Maybe the host volume is mounted inside the container only after the command has been run? Anything else that can explain this error, and how do I fix it?
Maybe there is a way to run more than one command inside the container to see what's going on? Like
CMD ["ls", ";", "pwd", ";", "sh", "start.sh"]
Update
When I use the following command in the Dockerfile
CMD ["ls"]
I get the error
ls: cannot open directory '.': Permission denied
Extra information
Docker version 1.12.6
The image's working directory (WORKDIR) is /work
You're mounting your volume to the /selenium folder in your container. Therefore the start.sh file isn't going to be in your working directory; it's going to be in /selenium. You want to mount your volume to a selenium folder inside your working directory, then make sure the command references this new path.
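A plain docker run equivalent of that advice, using the same Makefile variables as the run command above (a sketch, assuming the image's working directory is /work as noted in the question):
docker run --rm -v ${PWD}:/work/selenium $(DOCKER_IMAGE) sh selenium/start.sh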
If you use docker-compose, the YAML file to run the container would look something like this:
version: '3'
services:
  start:
    image: ${DOCKER_IMAGE}
    command: sh selenium/start.sh
    volumes:
      - .:/work/selenium
If you try to perform each step manually, entering the container with bash,
docker exec -it <container name> /bin/bash
it will be easier and quicker to look at the errors, and you can change the permissions and check where the file is located before running the .sh file and trying again.
Check the permissions using ls -l.
Give permission 777 using sudo chmod 777 file_name.
Repeat for any other files you might find.
Input:
- There is a Windows machine with Docker Toolbox installed.
- There is a shell script file baz.sh which calls py2dsc-deb.
Problem: py2dsc-deb is not available on Windows.
If I understand correctly, I can pull some Linux distro image from the Docker repository, create a container, and then execute the shell script file, and it will run py2dsc-deb and do its job.
I have pulled:
debian    stretch-slim    3ad21    3 weeks ago    55.3MB
Now:
How do I run my script using debian, something like docker exec mycontainer /path/to/test.sh?
Running docker run --rm debian:stretch-slim does nothing. Isn't it supposed to run the Debian distro at the docker-machine IP?
I have tried to keep the container up using docker run -it debian:stretch-slim /bin/bash, then ran the script using docker exec 1ef5b ./build.sh, but I am getting:
$ docker exec 745 ./build.sh
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"./build.sh\": stat ./build.sh: no such file or directory"
Does that mean I can't run an external script, and always have to copy it into the Docker container first?
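Not necessarily: a bind mount avoids copying anything in. A sketch (assuming build.sh sits in the current directory; on Docker Toolbox the path may also need the mingw path-mangling workarounds discussed earlier):
# mount the script's directory into the container and run it from there
docker run --rm -v "$PWD":/src debian:stretch-slim sh /src/build.sh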
You can execute a bash command inside your container by typing
docker exec -ti -u <username> <container_name> bash -c "cd /path/to/ && ./test.sh"
Let's say your container name is test_buildbox, you are root, and your script lives at /bin/test.sh. You can call this script by typing
docker exec -ti -u root test_buildbox bash -c "cd /bin/ && ./test.sh"
Please check that you have correct line endings (<LF>) in your .sh scripts if you built the Docker image on Windows.
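A quick way to spot CRLF endings from Git Bash (a generic check, not specific to this setup): cat -A prints carriage returns as ^M at the end of each line.
# lines ending in ^M$ have CRLF endings
cat -A test.sh | head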