I have a docker image (which is delivered as-is, with no Dockerfile etc.) with a Ruby application in it. When I try to run a container with docker run application_image bundle exec puma -C config/puma.rb I get starting container process caused "exec: \"bundle\": executable file not found in $PATH": unknown. All suggested fixes for this are to specify stuff in a Dockerfile (which is not present here). Is there a way to run the container this way?
There are a couple of possible workarounds:
Create another docker image based on the existing one, with bundle installed
Install bundle before actually running the application
It's best if you can ask the image maintainer for instructions.
If that fails, try exploring the docker image first, like @hmm suggested; see https://stackoverflow.com/a/58256085/5641227 on how to do that.
Then either extract the image contents, if you are allowed to, and build it yourself from scratch.
Or try building a new image from the one you have and add new build steps to the new Dockerfile:
FROM <your-current-image:and-tag>
RUN gem install bundler -v "~>2.0.2" --no-document --quiet --force
CMD ["bundle", "exec", "puma -C config/puma.rb"]
Then just run it after you tag and build your new image:
docker run new_application_image
Also, you need to have the right version of bundler; 2.0.2 above is just an example, and a conflicting version won't work.
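If you are unsure which Bundler version the application expects, one way to peek at it (a sketch, assuming the image contains a shell and that Gemfile.lock sits in the image's working directory) is to read the BUNDLED WITH section at the bottom of Gemfile.lock:

docker run --rm --entrypoint sh application_image -c "tail -n 2 Gemfile.lock"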
I have Docker Toolbox installed on my local machine and I'm trying to run Ruby commands to perform database migrations. I am using the following docker commands within the Docker Toolbox Quickstart Terminal Command Line:
docker-compose run app /usr/local/bin/bundle exec rake db:migrate
docker-compose run app bundle exec rake db:create RAILS_ENV=production
docker-compose run app /usr/local/bin/bundle exec rake db:seed
However, after these commands are called, I get the following error:
Could not locate Gemfile or .bundle/ directory
Within Docker Toolbox, I am in my project's directory (C:\project) when I run these commands.
After doing some research, it appears that I need to mount my Host directory somewhere inside my Home directory.
So I tried using the following Docker Mount commands:
docker run --mount /var/www/docker_example/config/containers/app.sh:/usr/local/bin
docker run --mount /var/www/docker_example/config/containers/app.sh:/c/project
Both of these commands are giving me the following error:
invalid argument "/var/www/docker_example/config/containers/app.sh:/usr/local/bin" for --mount: invalid field '/var/www/docker_example/config/containers/app.sh:/usr/local/bin' must be a key=value pair
See 'docker run --help'
Here is what I have in my docker-compose.yml file:
docker-compose.yml:
app:
  build: .
  command: /var/www/docker_example/config/containers/app.sh
  volumes:
    - C:\project:/var/www/docker_example
  expose:
    - "3000"
    - "9312"
  links:
    - db
  tty: true
Any help would be greatly appreciated!
The issue is because you are running on Windows. You need a shared folder between your Docker machine (the VM) and the host machine.
On my Mac, for example, /Users on the host is shared as /Users inside the Docker Toolbox VM (you can check this in the VM's shared-folder settings). Which means when I do
docker run -v ~/test:/test ...
It will share /Users/tarun.lalwani/test inside the VM to /test inside the container. Now since /Users inside the VM is shared to my host this would work perfectly. But if I do
docker run -v /test:/test ...
Then even if I have /test on my Mac, it won't be shared, because the host mount path is resolved on the Docker host (the VM), not on my Mac.
So in your case you should check which folder is shared, and to what path it is shared. Assuming C:\ is shared at /c, you would use the command below to get your files inside the VM:
docker run -v /c/Project:/var/www/html ..
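Applied to the docker-compose.yml above, that means the volume should point at the path as seen from inside the Docker Toolbox VM rather than the Windows path (assuming C:\ really is shared as /c; check the VM's shared-folder settings to confirm):

volumes:
  - /c/project:/var/www/docker_example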
When running a Docker command such as
docker run ubuntu /bin/echo 'Hello world'
used in the starter example on the Learn by Example page of the Docker docs, I see the error
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: oci runtime error: exec: "C:/Program Files/Git/usr/bin/bash": stat C:/Program Files/Git/usr/bin/bash: no such file or directory.
How can I resolve this?
This error can be caused by your system's setup including MinGW (you might see this if you have installed Git for Windows with MSYS2, for example - see here for more information). The path is being converted; to stop this you can add a double slash // before the command. In this example you can use
docker run ubuntu //bin/echo 'Hello world'
(notice the double slash (//) above). If all goes well you should now see
Hello world
A complete and slightly more complex example is starting an Ubuntu interactive shell:
docker run -it -v /$(pwd)/app:/root/app ubuntu //bin/bash
Note that in my case using Git Bash I only needed one extra slash because echo $(pwd) on my machine expands to:
/c/Users/UserName/path/to/volume/mount
As another example, the following can be used if zip is not available (as is the case on Windows 10 as well as Git Bash). You cannot easily zip a file for something like an AWS Lambda function (actually there are a few ways to do it without Docker, or even without installing third-party software, if you prefer). If you want to zip the app folder under your current directory, use this:
docker run -it -v /$(pwd)/app:/root/app mydockeraccount/dockerimagewithzip //usr/bin/zip -r //root/app/test1.zip //root/app
The mydockeraccount/dockerimagewithzip image can be built by creating a Dockerfile like this:
FROM ubuntu
RUN apt-get update && apt-get install -y zip
Then run:
docker build -t mydockeraccount/dockerimagewithzip .
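As an aside, if you would rather not add extra slashes, Git for Windows' bash also honours the MSYS_NO_PATHCONV environment variable, which disables the path conversion for a single invocation (this applies to Git for Windows specifically; treat it as an alternative to the double-slash trick):

MSYS_NO_PATHCONV=1 docker run ubuntu /bin/echo 'Hello world'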
I'm trying to dockerize my node.js app. When the container starts I want it to run a git clone and then start the node server. Therefore I put these operations in a .sh script and run the script as a single command in the ENTRYPOINT:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential libssl-dev gcc curl npm git
#install gcc 4.9
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ubuntu-toolchain-r/test
RUN apt-get update
RUN apt-get install -y libstdc++-4.9-dev
#install newest nodejs
RUN curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/
RUN npm install
ADD docker-entrypoint.sh /usr/src/app/
EXPOSE 8080
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
My docker-entrypoint.sh looks like this:
git clone git@<repo>.git
git add remote upstream git@<upstream_repo>.git
/usr/bin/node server.js
After building this image and running:
docker run --env NODE_ENV=development -p 8080:8080 -t -i <image>
I'm getting:
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
I shell into the container and the permission of docker-entrypoint.sh is:
-rw-r--r-- 1 root root 292 Aug 10 18:41 docker-entrypoint.sh
Three questions:
Does my bash script have wrong syntax?
How do I change the permission of a bash file before adding it into an image?
What's the best way to run multiple git commands in entrypoint without using a bash script?
Thanks.
"Permission denied" prevents your script from being invoked at all. Thus, the only syntax that could be possibly pertinent is that of the first line (the "shebang"), which should look like #!/usr/bin/env bash, or #!/bin/bash, or similar depending on your target's filesystem layout.
Most likely the filesystem permissions not being set to allow execute. It's also possible that the shebang references something that isn't executable, but this is far less likely.
Mooted by the ease of repairing the prior issues.
The simple reading of
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
...is that the script isn't marked executable.
RUN ["chmod", "+x", "/usr/src/app/docker-entrypoint.sh"]
will address this within the container. Alternately, you can ensure that the local copy referenced by the Dockerfile is executable, and then use COPY (which is explicitly documented to retain metadata).
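A minimal sketch of that host-side approach, using the paths from the Dockerfile in the question:

# on the host, next to the Dockerfile
chmod +x docker-entrypoint.sh
# in the Dockerfile, instead of ADD
COPY docker-entrypoint.sh /usr/src/app/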
An executable file needs to have the execute permission set before you can execute it.
On the machine where you are building the docker image (not inside the docker image itself), try running:
ls -la path/to/directory
The first column of the output for your executable (in this case docker-entrypoint.sh) should have the executable bits set something like:
-rwxrwxr-x
If not then try:
chmod +x docker-entrypoint.sh
and then build your docker image again.
Docker uses its own file system, but it copies everything over (including the permission bits) from the source directories.
I faced the same issue and resolved it with:
ENTRYPOINT ["sh", "/docker-entrypoint.sh"]
For the Dockerfile in the original question it should be like:
ENTRYPOINT ["sh", "/usr/src/app/docker-entrypoint.sh"]
The problem is that the original file does not have execute permission.
Check whether the original file has the permission:
run ls -al
If the result shows -rw-r--r--,
run
chmod +x docker-entrypoint.sh
before docker build!
Remove the trailing dot [.]
This problem took me more than 3 hours; in the end, the fix was simply removing the dot from the end of the command.
The problem was:
docker run -p 3000:80 --rm --name test-con test-app .
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
Just remove the dot from the end of your command line:
docker run -p 3000:80 --rm --name test-con test-app
Grant execution rights to the file docker-entrypoint.sh
sudo chmod 775 docker-entrypoint.sh
This is a bit stupid maybe, but the error message I got was Permission denied and it sent me spiralling down in a very wrong direction trying to solve it. (Here for example)
I hadn't even added any bash script myself; I think one is added by the node.js image which I use:
FROM node:14.9.0
I was wrongly running this to expose/connect the port on my local machine:
docker run -p 80:80 [name] . # this is wrong!
which gives
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
But you shouldn't even have a dot at the end; it was added to the documentation of another project's docker image by mistake. You should simply run:
docker run -p 80:80 [name]
I like Docker a lot but it's sad it has so many gotchas like this and not always very clear error messages...
This is an old question, asked two years prior to my answer, but I am going to post what worked for me anyway.
In my working directory I have two files: Dockerfile & provision.sh
Dockerfile:
FROM centos:6.8
# put the script in the /root directory of the container
COPY provision.sh /root
# execute the script inside the container
RUN /root/provision.sh
EXPOSE 80
# Default command
CMD ["/bin/bash"]
provision.sh:
#!/usr/bin/env bash
yum -y upgrade   # -y so the build does not wait for interactive confirmation
I was able to make the file in the docker container executable by setting the file outside the container as executable with chmod 700 provision.sh, then running docker build . again.
If you do not use a Dockerfile, you can simply add the permission as a command-line argument to bash:
docker run -t <image> /bin/bash -c "chmod +x /usr/src/app/docker-entrypoint.sh; /usr/src/app/docker-entrypoint.sh"
If you still get Permission denied errors when you try to run your script in the docker's entrypoint, just do not use the shell form of the ENTRYPOINT:
Instead of:
ENTRYPOINT ./bin/watcher
write:
ENTRYPOINT ["./bin/watcher"]
https://docs.docker.com/engine/reference/builder/#entrypoint
I can run the server using this command
bundle exec thin start --all /etc/thin
rvm is installed under user
How can I run it in Ubuntu on autostart?
I created a config file using thin config -c ...
Updated:
OK, the problem is that I have ruby and all gems installed with RVM under a user account.
I want to launch a standalone server (passenger, thin, doesn't matter).
I can do it under that user, but I want it to autostart. How can I do that?
I think you can create a bash script and then add it to autostart.
Note that you have to use the full path to bundle instead of the bare bundle command.
#!/bin/bash
/path/to/your/bundle exec /path/to/your/thin start --all /etc/thin
You can find where your bundle is installed using the which bundle command.
Do not forget to make your script executable: chmod +x /path/to/script.sh
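One simple way to wire that script into autostart on Ubuntu is a cron @reboot entry for the user that owns the RVM install (just one option among several; /path/to/script.sh is the wrapper script above):

# edit that user's crontab
crontab -e
# and add the line:
@reboot /path/to/script.sh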