I'm installing Phlex with Docker on my Windows 10 PC.
I have run the command docker create --name=Phlex --net=host -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 -p 5666:80 -p 5667:443 --privileged digitalhigh/phlex
and the container is created.
When I start the container (docker start Phlex), it runs successfully.
However, when I try to connect to localhost:5666 or localhost:5667, the connection is refused. What am I doing wrong here? Phlex EXPOSEs ports 80 and 443, and the only suspicious thing in the log is "ip: either "to" is duplicate, or "224.0.0.0" is garbage", and I have no idea what that means.
This is my full workflow; I have done nothing else.
You need to use
docker run --name=Phlex -p 5666:5666 -p 5667:5667 -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 --privileged digitalhigh/phlex
When you use --net=host, you should not use port mappings, so no -p X:Y flags should be there. Conversely, when you want port mappings, don't use --net=host.
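In other words, pick one of the two networking modes. A sketch of both, assuming the HTTPPORT/HTTPSPORT variables control which ports the services bind inside the container:

# Host networking: the container shares the host's network stack, so no -p mappings
docker run --name=Phlex --net=host -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 --privileged digitalhigh/phlex

# Bridge networking (the default): map host ports to container ports with -p, no --net=host
docker run --name=Phlex -p 5666:5666 -p 5667:5667 -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 --privileged digitalhigh/phlex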
Also, I looked at the image: it runs nginx and fpm in the same container. So if you are just testing Phlex, or it's not a core thing you work on, you can use this image. Otherwise you should build your own from a Dockerfile, as this image is not a particularly optimized one.
Related
In a ROS project, I have the following bash script that I use to run a docker container:
#!/bin/bash
source ~/catkin_ws/devel/setup.bash
rosnode kill some_ros_node
roslaunch supporting_ros_package launch_file.launch &
docker run -it \
--restart=always \
--privileged \
--net=host \
my_image:latest \
/bin/bash -c \
"
roslaunch my_package my_launch_file.launch
"
export containerId=$(docker ps -l -q)
However, what I'd like is for the bash commands preceding the docker run command to also re-run on the host machine (not within the container) every time the container restarts, especially when the machine boots up.
How might I achieve this?
There are a few ways I can think of doing this:
Add this script as a system service (e.g. a systemd unit), as sketched below; see this answer regarding adding a system service.
Add this script into another container that is also set to restart always, and mount the docker socket into that other container; see this example.
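For the first option, a minimal systemd unit could look like the sketch below; the unit name and script path are hypothetical and would need adjusting to your setup:

# /etc/systemd/system/ros-container.service (hypothetical name and path)
[Unit]
Description=Run ROS helper commands and the docker container
After=docker.service network-online.target
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Points at the script from the question
ExecStart=/home/user/scripts/run_container.sh

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable ros-container.service so it runs at boot.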
When I run or start a Docker container, it will not stay running.
docker start will just return the name of whatever container I gave it, but won't actually do anything. docker run (e.g. $ docker run -p 8080:80 --name hello -d hello-world) will create the container, but it exits immediately.
If I run docker ps after one of these, it will show nothing listed as currently running.
If I run docker ps -a, it will show all of my containers and show the one that I just attempted to run having exited a few seconds ago.
Is this common, and how do I get my containers to stay running? I am trying to learn how to use Docker and it has been one of the worst experiences. Thank you for any help or suggestions.
Docker containers are generally used to run applications/processes in an isolated environment.
When you run the hello-world image, it creates a container whose only purpose is to print a greeting to standard output. That is the only process that runs, and once it finishes the container is done with its work. That is why you see nothing when you run docker ps.
In order to keep a container running, you need a process inside that container that keeps running (for example: a server, database, or other long-lived application).
Try creating a container from the mysql image, and then check the running containers.
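For example (the password value is just a placeholder):

# mysqld keeps running in the foreground, so the container stays up
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
docker ps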
In your command, you specify the -d flag (aka detach), which means "Run container in background and print container ID" (from the Docker docs). See more discussion about this here: Docker container will automatically stop after "docker run -d"
docker run -p 8080:80 --name hello -d hello-world
If you run it without the -d flag, it will run in the foreground and send output to your terminal:
docker run -p 8080:80 --name hello hello-world
You don't see it in docker ps because that container just executes the hello-world script and exits. If the container starts a long-running process, you will be able to find it in docker ps. To verify this, you can try running the nginx demo containers (e.g. nginx-hello), which serve up 'hello world'/demo pages.
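For example, a minimal check using the nginxdemos/hello image from the nginx demos:

# nginx stays in the foreground inside the container, so it keeps running
docker run --name hello-nginx -d -p 8080:80 nginxdemos/hello
docker ps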
To know what's wrong with your container, use the docker logs <container name> command.
Then you can sort out what went wrong with it.
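For the container from the question, that would be:

docker logs hello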
Is this common and how do I get my containers to stay running?
What happens when you start a Docker container?
By default, it executes the command/entrypoint specified in the image's Dockerfile.
Generally, that command or entrypoint is a script or a program located in the image.
When that script/program exits, the container exits too. That's all.
To keep a container alive, the script/program has to keep running.
You start a hello image container; the container says "hello" and exits.
That may be a script as simple as:
#!/bin/sh
echo "hello"
So that is expected to finish, and the container exits.
Run a database or a web server and you will see a different behavior: the script/program keeps running until you stop it, so the container also stays running until you stop it.
To experiment, you can run a container with an endless command. Note that --entrypoint must come before the image name, and that the hello-world image contains only the hello binary, so use an image that includes tail, e.g. busybox:
docker run --name keepalive -d --entrypoint tail busybox -f /dev/null
You will see that the container stays running.
A docker container exits when its main process finishes. The hello-world main process just prints some text and exits, so the container exits too.
You can run the command directly to see its output:
docker run hello-world
If you want a long-running container, you can try an nginx demo:
docker run --name nginx-demo -p 8080:80 -d nginx
Then you can visit http://localhost:8080 in your web browser.
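Or check it from the command line:

curl http://localhost:8080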
I'm trying to automate deployment with webhooks to the Docker hub based on this tutorial. One container runs the web app on port 80. On the same host I run another container that listens for post requests from the docker hub, triggering the host to update the webapp image. The post request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems the bash script does not implicitly have access to docker on the host. How should I proceed?
Stuff I tried that did not work:
When firing up the listener container, I added the host IP like this, based on the docs:
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print\$2}' | cut -d / -f 1`
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
Similarly, I substituted the --add-host flag with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }'), based on this suggestion.
Neither the docker binary nor the docker socket is present in a container by default (why would they be?).
You can solve this fairly easily by mounting the binary and the socket from the host when you start the container, e.g.:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think that would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
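Applied to your listener container, a sketch of the fix (reusing the names from your question) would be:

# Mount the host's docker binary and socket so deploy.sh can run docker commands
docker run --name listener \
  -v $(which docker):/usr/bin/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener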
I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and eventually build a Dockerfile from those commands.
So I need to use multiple terminals, say, two: one to run some commands, the other to test those commands.
If I were using a real machine, I could ssh into it to get multiple terminals, but how can I do this in docker?
Maybe the solution is to run docker with CMD /bin/bash and use screen inside that bash?
EDIT
In my situation, one shell runs a server program and the other runs a client program that tests the server. Because the server program and client program are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and volumes to make the files on the server available to the client (see the sketch at the end of this answer). If you really want to have two terminals into the same container, nothing stops you from using ssh. I tested this docker sshd server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
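Or, to print only the IP address:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' <id or name of container>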
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another.
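For comparison, a minimal sketch of the links/volumes approach mentioned at the top of this answer (the image names are hypothetical):

# Run the server with a volume, then link the client container to it
docker run -d --name server -v /shared my_server_image
docker run -it --link server:server --volumes-from server my_client_image /bin/bash
# Inside the client, the server is reachable under the hostname "server",
# and /shared is the same directory the server sees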
If I understand the problem correctly, you can use nsenter.
Assuming you have a running docker container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format '{{.State.Pid}}' nginx`
This starts a program (by default $SHELL) in the namespaces of the given PID.
You can get more than one shell by running the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. its files. For example, have a look around the nginx container as shown below.
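For example, once inside, you can look at the container's filesystem directly (paths assume the stock nginx image layout):

ls /etc/nginx
cat /etc/nginx/nginx.conf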
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.
I want to ssh or bash into a running docker container. Please see the example:
$ sudo docker run -d webserver
webserver is a clean image from ubuntu:14.04
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
665b4a1e17b6 webserver:latest /bin/bash ... ... 22/tcp, 80/tcp loving_heisenberg
Now I want to get something like this (go into the running container):
$ sudo docker run -t -i webserver (or maybe 665b4a1e17b6 instead)
$ root@665b4a1e17b6:/#
Previously I used Vagrant, so I want to get behavior similar to vagrant ssh. Could anyone help me, please?
After the release of Docker version 1.3, the correct way to get a shell or other process on a running container is using the docker exec command. For example, you would run the following to get a shell on a running container:
docker exec -it myContainer /bin/bash
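You can also use it to run a single one-off command instead of an interactive shell:

docker exec myContainer ls /var/log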
You can find more information in the documentation.
The answer is the docker attach command.
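For example, using the container ID from your docker ps output:

docker attach 665b4a1e17b6

Note that attach connects you to the container's main process, so exiting may stop the container; docker exec (shown in the previous answer) starts a separate process instead.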
For information see: https://askubuntu.com/a/507009/159189