Run docker with aliased port and access to bash

I'm trying to start a Docker snapshot and connect to it via bash, but also alias its port so I can access it from my local system at localhost:3333. This is what I have:
docker run -d -p 3333:3000 -t -i mysnapshot /bin/bash
However, while it does start the container, it doesn't connect to it via bash.
This is the output it generates:
3c86ca433d645c6c11315e89bbeaf89f072e2d1fa83213d4c4256c4a1af98322
and this is the dockerfile used to build the image:
FROM node:10
# Setting working directory. All paths will be relative to WORKDIR
WORKDIR /usr/src/app
# Installing dependencies
COPY package*.json ./
RUN npm install
# Copying source files
COPY . .
# Building app
RUN npm run build
# Running the app
CMD [ "npm", "start" ]

You used the -d option in the docker run command, which runs the container in detached mode in the background.
Please check this out.
To get into the bash run
docker exec -it <container-id> /bin/bash
where <container-id> can be retrieved from docker ps output.
Also, as per your Dockerfile, you want npm start to be the first process in the container, so don't specify /bin/bash in the docker run command, because it will override the CMD npm start from the Dockerfile.
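For example, a minimal sketch of that workflow (the container name dev-app is just an illustrative placeholder):
# start the container detached, keeping the image's CMD (npm start) and the port mapping
docker run -d -p 3333:3000 --name dev-app mysnapshot
# then open a shell in the running container whenever you need one
docker exec -it dev-app /bin/bash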
Hope this helps, let me know.

It seems you may need to override your entrypoint, because the last line of your Dockerfile specifies npm start as the start command.
Also, -d (detached mode) is not needed.
Try this one:
docker run -it -p 3333:3000 --entrypoint=/bin/bash mysnapshot
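Once you are inside that shell, the command from the Dockerfile's CMD can still be run by hand if you want the app up:
npm start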

Related

docker run doesn't run my script in /etc/profile.d/

I have my own script /etc/profile.d/myscript.sh (mounted from the host) prepared in the container, but it fails to execute with docker run:
$ docker run -it -v /etc/profile.d/myscript.sh:/etc/profile.d/myscript.sh centos7 myscript arg1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"myscript\": executable file not found in $PATH": unknown.
If I docker run without any command and attach directly into the container, the script works.
The script is there under /etc/profile.d/, and I am able to run myscript inside the container:
[root@c5f121d37ca5 /]# myscript
Usage:
myscript [arg1] [arg2]
...
[root@c5f121d37ca5 /]#
[root@c5f121d37ca5 /]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I have no clue why my script is not executable with docker run.
My Dockerfile for the image just does basic things: yum update and installing some packages.
I'd appreciate it if someone could shed some light on how to make myscript executable at docker run, or anything I need to change in the Dockerfile.
I also added the path of the script to the Dockerfile and re-ran, but still got the same error.
ENV PATH=$PATH:/etc/profile.d/
I checked that /etc/profile.d/ is in the PATH inside the container:
$ docker run centos7 env | grep PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/etc/profile.d/
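A possible workaround sketch, assuming myscript.sh is a plain shell script: invoke it through bash by its full path, so neither the execute bit nor a $PATH lookup is needed.
docker run -it -v /etc/profile.d/myscript.sh:/etc/profile.d/myscript.sh centos7 bash /etc/profile.d/myscript.sh arg1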

docker entrypoint running bash script gets "permission denied"

I'm trying to dockerize my node.js app. When the container is built I want it to run a git clone and then start the node server. Therefore I put these operations in a .sh script and run the script as a single command in the ENTRYPOINT:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential libssl-dev gcc curl npm git
#install gcc 4.9
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ubuntu-toolchain-r/test
RUN apt-get update
RUN apt-get install -y libstdc++-4.9-dev
#install newest nodejs
RUN curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/
RUN npm install
ADD docker-entrypoint.sh /usr/src/app/
EXPOSE 8080
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
My docker-entrypoint.sh looks like this:
git clone git@<repo>.git
git remote add upstream git@<upstream_repo>.git
/usr/bin/node server.js
After building this image and run:
docker run --env NODE_ENV=development -p 8080:8080 -t -i <image>
I'm getting:
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
I shell into the container and the permission of docker-entrypoint.sh is:
-rw-r--r-- 1 root root 292 Aug 10 18:41 docker-entrypoint.sh
Three questions:
1. Does my bash script have wrong syntax?
2. How do I change the permission of a bash file before adding it into an image?
3. What's the best way to run multiple git commands in an entrypoint without using a bash script?
Thanks.
"Permission denied" prevents your script from being invoked at all. Thus, the only syntax that could be possibly pertinent is that of the first line (the "shebang"), which should look like #!/usr/bin/env bash, or #!/bin/bash, or similar depending on your target's filesystem layout.
Most likely the filesystem permissions not being set to allow execute. It's also possible that the shebang references something that isn't executable, but this is far less likely.
Mooted by the ease of repairing the prior issues.
The simple reading of
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
...is that the script isn't marked executable.
RUN ["chmod", "+x", "/usr/src/app/docker-entrypoint.sh"]
will address this within the container. Alternatively, you can ensure that the local copy referenced by the Dockerfile is executable, and then use COPY (which is explicitly documented to retain metadata).
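A short sketch of the second approach, assuming the paths from the Dockerfile above:
# on the host, before running docker build
chmod +x docker-entrypoint.sh
# then in the Dockerfile, COPY keeps the execute bit that was set on the host
COPY docker-entrypoint.sh /usr/src/app/
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]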
An executable file needs to have permissions for execute set before you can execute it.
On the machine where you are building the Docker image (not inside the image itself), try running:
ls -la path/to/directory
The first column of the output for your executable (in this case docker-entrypoint.sh) should have the executable bits set, something like:
-rwxrwxr-x
If not then try:
chmod +x docker-entrypoint.sh
and then build your docker image again.
Docker uses its own filesystem, but it copies everything over (including permission bits) from the source directories.
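Putting that together, roughly (the image tag is a placeholder):
# on the build machine, not inside the container
ls -la .                          # check the mode column for docker-entrypoint.sh
chmod +x docker-entrypoint.sh     # set the execute bits
docker build -t <image> .         # rebuild so the copied file carries the new permissions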
I faced the same issue, and it was resolved by:
ENTRYPOINT ["sh", "/docker-entrypoint.sh"]
For the Dockerfile in the original question, it would be:
ENTRYPOINT ["sh", "/usr/src/app/docker-entrypoint.sh"]
The problem is due to the original file not having execute permission.
Check whether the original file has the permission:
run ls -al
If the result shows -rw-r--r--,
run
chmod +x docker-entrypoint.sh
before docker build!
Remove Dot [.]
This problem took me more than 3 hours; in the end, the fix was simply removing the dot from the end of the command.
The problem was:
docker run -p 3000:80 --rm --name test-con test-app .
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
Just remove the dot from the end of your command line:
docker run -p 3000:80 --rm --name test-con test-app
Grant execution rights to the file docker-entrypoint.sh
sudo chmod 775 docker-entrypoint.sh
This is maybe a bit silly, but the error message I got was Permission denied and it sent me spiralling down a very wrong path trying to solve it.
I hadn't even added any bash script myself; I think one is added by the Node.js image I use:
FROM node:14.9.0
I was wrongly running this to expose/connect the port on my local machine:
docker run -p 80:80 [name] . # this is wrong!
which gives
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
But you shouldn't have a dot at the end; it was added to the documentation of another project's Docker image by mistake. You should simply run:
docker run -p 80:80 [name]
I like Docker a lot, but it's sad that it has so many gotchas like this and error messages that aren't always very clear...
This is an old question, asked two years prior to my answer, but I am going to post what worked for me anyway.
In my working directory I have two files: Dockerfile & provision.sh
Dockerfile:
FROM centos:6.8
# put the script in the /root directory of the container
COPY provision.sh /root
# execute the script inside the container
RUN /root/provision.sh
EXPOSE 80
# Default command
CMD ["/bin/bash"]
provision.sh:
#!/usr/bin/env bash
yum upgrade
I was able to make the file in the Docker container executable by marking the file outside the container as executable with chmod 700 provision.sh and then running docker build .
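In other words, roughly (myimage is just a placeholder tag):
chmod 700 provision.sh        # mark the script executable on the host
docker build -t myimage .     # COPY in the Dockerfile keeps that permission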
If you do not use a Dockerfile, you can simply add the permission as a command line argument to bash:
docker run -t <image> /bin/bash -c "chmod +x /usr/src/app/docker-entrypoint.sh; /usr/src/app/docker-entrypoint.sh"
If you still get Permission denied errors when you try to run your script in the Docker entrypoint, make sure you do NOT use the shell form of ENTRYPOINT.
Instead of:
ENTRYPOINT ./bin/watcher
write:
ENTRYPOINT ["./bin/watcher"]
https://docs.docker.com/engine/reference/builder/#entrypoint

Phundament under Windows - "Interactive mode is not yet supported on Windows"

I have Docker Toolbox installed under Windows 7. The Docker daemon is running inside a VM (the default behavior of Docker Toolbox).
I am trying to get Phundament running using the default tutorial.
It all works fine until I reach this command:
docker-compose run php composer install
It results in: "Interactive mode is not yet supported on Windows".
I've successfully attached to the running container using docker exec -it <container ID> bash, but when I do an ls /app command in either of the two containers I get no files in that directory. In effect, the attempt to run composer install there fails.
I tried attaching to both containers and the result is identical.
I also noticed that behavior just recently; it's sadly a limitation of docker-compose on Windows.
For the command you mentioned, you can actually run:
docker-compose run -d php composer install
As general workarounds...
use docker exec -it app_php_1 bash
see also https://getcarina.com/docs/troubleshooting/troubleshooting-cannot-enable-tty-mode-on-windows/
if you don't really need an interactive shell, you could just run a command or script, like docker-compose run -d php setup.sh
Note: I need to double-check the above suggestions on a real Windows testing system.
PS: I am the author of Phundament. I've also just created an issue for this.
Please try:
winpty docker-compose run php composer install
It works, for example:
winpty docker run --rm -it debian bash

'docker run -v' does not work on Windows using Docker Toolbox

When running the following command from a CoreOS VM, it works as expected:
docker run --rm -v $PWD:/data composer init
It will initialize the composer.json file in the current working directory by using the Docker volume mapping as specified. The Docker container basically has the PHP tool composer installed and will run that tool inside the /data folder of the container. By using the mapping, it actually operates on the files on the host machine.
However when trying to run this command on Windows using Docker Toolbox I get the following error.
$ docker run --rm -v $PWD:/data composer --help
invalid value "C:\\Users\\Marco;C:\\Program Files\\Git\\data" for flag -v: bad mount mode specified : \Program Files\Git\data
See 'C:\ProgramData\Chocolatey\lib\docker\bin\docker.exe run --help'.
What I notice here is that although I am in Git Bash when executing the command, it still uses Windows paths. So then I tried the following (surrounding it with quotes):
$ "docker run --rm -v $PWD:/data composer --help"
bash: docker run --rm -v /c/Users/Marco:/data composer --help: No such file or directory
Now it is unable to find the directory.
I also tried without the $PWD variable, but this doesn't make a difference.
How do I make this work on Windows?
This should work:
$ docker run --rm -v //c/Users/Marco:/data composer --help
Try MSYS_NO_PATHCONV=1 docker run ...
Git Bash tries to convert POSIX-style paths to Windows paths when it calls native Windows commands (such as docker.exe here).
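Applied to the command from the question, that would look roughly like this:
# prevent Git Bash / MSYS from rewriting the volume path for this one invocation
MSYS_NO_PATHCONV=1 docker run --rm -v "$PWD":/data composer init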

Can I restart a docker container from within the container terminal?

I am making a Sinatra app inside a container, but whenever I want to see the changes I have to detach and run:
docker restart <container_ID>
to see the changes.
Is there any way I could restart the container from within to see the changes?
I cloned https://github.com/tcnksm-sample/docker-sinatra.git
Build: sudo docker build -t sinatra .
Run the container: sudo docker run -d -p 4567:4567 sinatra
Enter the container terminal: sudo docker exec -it <container_ID> bash
I changed the app.rb file but nothing changed on http://localhost:4567, so I detached from the container and ran docker restart <container_ID> to see the changes. Since I am going to change app.rb a lot, it is very inconvenient to have to detach and run docker restart <container_ID> every time I change something.
You shouldn't have to restart the whole Docker engine itself.
If your Dockerfile pulls the changes from a repo and redoes a bundle install, as in this Dockerfile, all you would need to do is, as in this example:
# on docker server or the same machine
$ sudo docker stop container-id
$ sudo docker pull luisbebop/docker-sinatra-hello-world
$ sudo docker run -d -p 5000:5000 luisbebop/docker-sinatra-hello-world
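The same cycle applied to the sinatra image built in the question would look something like this, assuming you edit app.rb on the host where the image was built:
sudo docker stop <container_ID>
sudo docker build -t sinatra .             # rebuild after editing app.rb
sudo docker run -d -p 4567:4567 sinatra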
