I am running an example from the Docker tutorial:
docker run -d -P nginx
This starts correctly, as docker ps outputs the following:
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                                           NAMES
a5838f701c8f   nginx   "nginx -g 'daemon off"   3 minutes ago   Up 2 minutes   0.0.0.0:32773->80/tcp, 0.0.0.0:32772->443/tcp   compassionate_stallman
When I run docker inspect a5838f701c8f, I can see the IP of the container is 172.17.0.2.
However, for some reason going to localhost:32772, 127.0.0.1:32772, or 0.0.0.0:32772 gives me ERR_CONNECTION_REFUSED. Going to 172.17.0.2:32772 just loads endlessly and never shows anything...
Could this be something with my host? I am using OS X 10.9.5 and Docker 1.10.3, build 20f81dd.
If you are using Docker Machine, you should test with the URL 192.168.99.104:32772.
Please take a look at https://docs.docker.com/machine/reference/ip/ to learn how to get the IP address with Docker Machine.
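For example, assuming your machine is named "default" (the usual name with Docker Toolbox; an assumption here), you can fetch the IP and test the mapped HTTP port like this:
docker-machine ip default
curl http://$(docker-machine ip default):32773
Note that in your docker ps output 32773 maps to the container's port 80, while 32772 maps to 443; the stock nginx image only serves HTTP on port 80, so connections to the 443 mapping will be refused even with the right IP.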
I'm using WSL 2 with the Ubuntu 20.04 distribution, and I was trying to create a container in Docker with the following command:
docker run --hostname=quickstart.cloudera --privileged=true -it -v $PWD:/src --publish-all=true -p 8888:8888 -p 8080:8080 -p 7180:7180 cloudera/quickstart /usr/bin/docker-quickstart
When I ran this command, a download of about 4.4 GB started (I think because it was the first time I ran this container). When the download was over, I used docker ps -a to check the containers, and the status for the container is Exited (139) 6 minutes ago. When I check my image list:
REPOSITORY            TAG      IMAGE ID       CREATED        SIZE
uracilo/hadoop        latest   902e5bb989ad   8 months ago   727MB
cloudera/quickstart   latest   4239cd2958c6   4 years ago    6.34GB
I think that the image was created successfully, but when I try to run the first command I keep getting Exited (139) in the status, and I can't use the container.
Apparently exit code 139 refers to some problem with the system or the hardware, maybe the RAM, but I'm not sure, and I don't know if the problem is because I'm using WSL or because my 8 GB of RAM is not enough to run the image.
Is there any way to run this image successfully?
You need to create a file named .wslconfig under the %UserProfile% folder on your Windows machine and copy the following lines into that file:
[wsl2]
kernelCommandLine = vsyscall=emulate
Then just restart your Docker engine.
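A minimal way to do that with the WSL 2 backend (assuming Docker Desktop) is to shut WSL down from PowerShell and then start Docker Desktop again:
wsl --shutdown
For background: cloudera/quickstart is built on CentOS 6, whose glibc still relies on the legacy vsyscall mechanism. Recent kernels disable vsyscall by default, so those processes segfault, and exit code 139 is 128 + 11, i.e. the container's main process was killed by SIGSEGV. The kernelCommandLine option above re-enables vsyscall in emulated form.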
I fixed this by changing the Docker engine from the WSL 2 backend to Hyper-V.
https://community.cloudera.com/t5/Support-Questions/docker-exited-with-139-when-running-cloudera-quickstart/td-p/298586
I'm new to Docker. I have been trying to deploy a Linux container (with Windows as the host) with a Google Cloud image inside, using Docker. I'm able to do everything well; at the end the server is running perfectly, but when I check the server using localhost in the browser, I get a blank page:
Blank page
This is the Dockerfile:
FROM google/cloud-sdk
ENV PATH /usr/lib/google-cloud-sdk/bin:$PATH
WORKDIR docker_folder
COPY local_folder/ .
RUN pwd
EXPOSE 8080
CMD ["java_dev_appserver.sh", "."]
This is the command I'm using to build my image (in the Windows command prompt):
docker build --tag serverdeploy .
This is the command I'm using to run my container
docker run -p 8080:8080 serverdeploy
This is the stack trace I got when I ran the server, where I can see that the server is running.
I did some research, and it looks like Docker had a problem with ports when you use a Linux container on Windows (not sure if it's solved by now). I've already tried all the possible solutions I found out there (even replacing 'localhost' with all the IPs I get when I run ipconfig in the cmd), but I still get the same error.
And, as a last hope, I need your help to understand what I'm doing wrong, or whether I'm missing something.
You are running your service bound to localhost, which means no remote connections are accepted (the same goes for binding to 127.0.0.1), and for your container the host is a remote connection.
Change the binding to 0.0.0.0 (which I would guess is the default) and enjoy.
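As a sketch, assuming java_dev_appserver.sh passes its arguments through to the App Engine Java dev server (check your script; the flag name here is an assumption), you could set the bind address in the Dockerfile CMD:
CMD ["java_dev_appserver.sh", "--address=0.0.0.0", "."]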
Btw, sharing your java_dev_appserver.sh would be helpful for answering the question.
I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
1. I set up a DigitalOcean droplet with Docker.
2. I ran docker pull dhimmel/hetionet and it worked.
3. Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant @Path("/") annotation, as described in this SO post's comment, buried in the Docker container, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in step 3 is expected. It looks like the Docker container successfully launched, downloaded the Hetionet database, and launched the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j is still launching.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development commands map ports. This makes the Neo4j server running inside your Docker container available at http://localhost:7474/, which is most likely what you want. If you're doing this on DigitalOcean, replace http://localhost with the IP address of your droplet.
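As a quick check, you can curl the Neo4j HTTP endpoint once the container is up; $DROPLET_IP below is just a placeholder for your droplet's address:
curl http://$DROPLET_IP:7474/
A JSON response means the server is up and reachable from outside the container.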
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server -- it just lets you explore the image.
Does this clear things up?
I installed boot2docker as explained on the docker website. Here are some command runs to show that I have things installed correctly:
$$:~ kv$ boot2docker start
Waiting for VM and Docker daemon to start...
...................ooo
Started.
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
$$:~ kv$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu        14.04    b39b81afc8ca   11 days ago   188.3 MB
hello-world   latest   e45a5af57b00   3 weeks ago   910 B
After this, I ran the following command:
docker run -t -i ubuntu:14.04 /bin/bash
Inside the container, I installed zeromq, and started a zeromq server on port 5555 using tcp.
My questions are the following:
If I exit out of the container, will it save all the work I do inside it?
I have no idea how to connect to the server running on port 5555. I read something about exposing a port, but I am not sure how to go about doing that. I did an ifconfig inside the container, and tried to connect to the server from the host like this:
$$:~ kv$ ./zmq_client tcp://container_ip:5555
This did not work. Can someone please list the steps I need to take in order to connect to the server running within the container?
For completeness' sake, I am providing the list of my environment variables:
TERM_PROGRAM=Apple_Terminal
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/km/5kbpdx4s7cg4rmyc6d5q9l9r0000gq/T/
DOCKER_HOST=tcp://192.168.109.103:2376
Apple_PubSub_Socket_Render=/tmp/launch-1tWMHJ/Render
TERM_PROGRAM_VERSION=326
OLDPWD=/Users
TERM_SESSION_ID=262CBC8B-0A74-4B70-9F28-D9FA51FF713C
USER=kv
SSH_AUTH_SOCK=/tmp/launch-ZTWNGL/Listeners
__CF_USER_TEXT_ENCODING=0x1F7:0:0
DOCKER_TLS_VERIFY=1
__CHECKFIX1436934=1
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
PWD=/Users/kv
DOCKER_CERT_PATH=/Users/kv/.boot2docker/certs/boot2docker-vm
HOME=/Users/kv
SHLVL=1
LOGNAME=kv
LC_CTYPE=UTF-8
DISPLAY=/tmp/launch-rco9zt/org.macosforge.xquartz:0
_=/usr/bin/env
One last question I have is about performance. Within my Mac OS X host, I have a Docker container running (which runs Ubuntu). If I run an application, like a zeromq-based server, inside the container, will it not be slower compared to running it on Mac OS X directly? Please explain the benefits of using Docker in such a scenario.
You should really do some more reading and research before turning to SO, then ask about anything you can't figure out. But:
No. If the container is "exited" you can restart it and your files will still be there, but once it is removed your files are gone. You can use docker commit to save them to an image, but the best bet is to use a Dockerfile.
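A minimal Dockerfile sketch along those lines (the zeromq package name and the zmq_server binary are illustrative assumptions, not part of your setup):
FROM ubuntu:14.04
# bake the dependency into the image instead of installing it by hand every time
RUN apt-get update && apt-get install -y libzmq3-dev
# copy in your server binary and declare the port it listens on
COPY zmq_server /usr/local/bin/zmq_server
EXPOSE 5555
CMD ["/usr/local/bin/zmq_server"]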
docker run -p 5000:8000 image will expose port 8000 in the container as port 5000 on the host.
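Applied to your zeromq server (the image name is hypothetical), and keeping in mind that with boot2docker the containers live inside a VM, you connect to the VM's address rather than to localhost on the Mac:
docker run -d -p 5555:5555 my-zmq-image
./zmq_client tcp://$(boot2docker ip):5555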
Yes, it will be slower due to the boot2docker VM. It would not be slower if you were running on a Linux host. The advantage is that zeromq is now running in an isolated container with all its dependencies.
I am new to Docker and nowhere near a networking expert, but I am seeing some strangeness when trying to run a Docker container instance (right word?). I am running Docker on OS X and set it up using the documentation found here: http://viget.com/extend/how-to-use-docker-on-os-x-the-missing-guide
Everything seems to have gone fine; I then set up the port forwarding rules via these lines:
for i in {49000..49900};
do VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
done
I can confirm the rules on the boot2docker VM by checking the configuration in the Oracle VM VirtualBox Manager under Network -> Adapter 1 -> Port Forwarding on OS X.
I then run this command to get the container.
docker run -d -P dockerhub.emory.edu/ecoi_trunk:2
I do a "docker ps" and get this info.
CONTAINER ID   IMAGE                              COMMAND                 CREATED          STATUS          PORTS                                           NAMES
f20bfefa2e97   dockerhub.emory.edu/ecoi_trunk:2   "/usr/sbin/apachectl    18 seconds ago   Up 15 seconds   0.0.0.0:49153->443/tcp, 0.0.0.0:49154->80/tcp   cranky_einstein
However, when I run lsof -i :49153, I see nothing is listening. I also can't reach the container via localhost:49153 in my browser; it just hangs.
What's strange is if I explicitly set the port (rather than allowing docker to assign one) via the following command:
docker run -d -p 49000:80 dockerhub.emory.edu/ecoi_trunk:2
It seems to work (lsof -i :49000 shows a TCP LISTEN), and I can confirm it's listening and the container is reachable via localhost:49000. However, it's extremely slow. I'm not sure whether the two are related, but I would welcome any tips or thoughts.