Trying to run Docker on Mac OS X with boot2docker.
Everything seems fine, but I cannot run Docker images. I must be missing something obvious.
Guides I've used:
NodeJS Web App
Docker on Mac OS X
My Dockerfile:
FROM ubuntu:12.04
# Build dependencies
RUN apt-get -y update
RUN apt-get install build-essential -y
RUN apt-get install curl -y
# Install NodeJS
RUN curl -L http://nodejs.org/dist/v0.10.22/node-v0.10.22.tar.gz | tar -xz
RUN cd /node-v0.10.22 && ./configure
RUN cd /node-v0.10.22 && make && make install && make clean
# Global NPM installs
RUN npm install --silent -g express lodash ejs forever
RUN mkdir /app
ADD server.js /app/server.js
ADD dist /app/dist
ADD lib /app/lib
ADD test.js /app/test.js
CMD ["node", "/app/test.js"]
EXPOSE 8080
Docker build output:
#xp (master) ~/work/front-portal: docker build -t front-portal .
Uploading context 118.7 MB
Uploading context
Step 0 : FROM ubuntu:12.04
---> 9cd978db300e
Step 1 : RUN apt-get -y update
---> Using cache
---> ee9a4b864ffb
Step 2 : RUN apt-get install build-essential -y
---> Using cache
---> e7dd304d6f92
Step 3 : RUN apt-get install curl -y
---> Using cache
---> ded30df6d5c2
Step 4 : RUN curl -L http://nodejs.org/dist/v0.10.22/node-v0.10.22.tar.gz | tar -xz
---> Using cache
---> d132c9cdd09c
Step 5 : RUN cd /node-v0.10.22 && ./configure
---> Using cache
---> 9036f0ce77d2
Step 6 : RUN cd /node-v0.10.22 && make && make install && make clean
---> Using cache
---> c29bcfa1d058
Step 7 : RUN npm install --silent -g express lodash ejs forever
---> Using cache
---> d389052f5e49
Step 8 : RUN mkdir /app
---> Using cache
---> 33576951eb9b
Step 9 : ADD server.js /app/server.js
---> Using cache
---> 2a4aa2230170
Step 10 : ADD dist /app/dist
---> Using cache
---> 4350b786481c
Step 11 : ADD lib /app/lib
---> Using cache
---> 58b0a3850c01
Step 12 : Add test.js /app/test.js
---> Using cache
---> 441d63b47297
Step 13 : CMD ["node", "/app/test.js"]
---> Using cache
---> 013aaa78b0a5
Step 14 : EXPOSE 8080
---> Running in 8962747dd91a
---> 7410cc1bdbed
Successfully built 7410cc1bdbed
Contents of test.js:
var express = require('express');
// Constants
var PORT = 8080;
// App
var app = express();
app.get('/', function (req, res) {
res.send('Hello World\n');
});
app.listen(PORT);
console.log('Running on http://localhost:' + PORT);
Boot2docker is running:
g#xp (master) ~/work/front-portal: ./boot2docker status
[2014-02-12 18:32:50] boot2docker-vm is running.
g#xp (master) ~/work/front-portal:
But I cannot launch my container:
g#xp (master) ~/work/front-portal: DEBUG=1 docker run front-portal
[debug] commands.go:2484 [hijack] End of stdout
[debug] commands.go:2079 End of CmdRun(), Waiting for hijack to finish.
g#xp (master) ~/work/front-portal:
Though a simple echo works:
g#xp (master) ~/work/front-portal: DEBUG=1 docker run front-portal echo "test"
test
[debug] commands.go:2484 [hijack] End of stdout
[debug] commands.go:2079 End of CmdRun(), Waiting for hijack to finish.
g#xp (master) ~/work/front-portal:
And my node test.js file itself is fine:
g#xp (master) ~/work/front-portal: node test.js
Running on http://localhost:8080
g#xp (master) ~/work/front-portal: DEBUG=1 docker run front-portal ls -al /app/test.js
-rw-r--r-- 1 501 dialout 232 Feb 12 2014 /app/test.js
[debug] commands.go:2484 [hijack] End of stdout
[debug] commands.go:2079 End of CmdRun(), Waiting for hijack to finish.
g#xp (master) ~/work/front-portal:
You need to explicitly enable the ports for NAT pass-through.
The "docker" port is already configured this way, as is SSH. But application-specific ports still need to be enabled.
You can use something like
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port4000,tcp,,4000,,4000"
to add a new forwarding rule via the commandline. That example adds port 4000. Or you could use the VirtualBox GUI to do the same thing.
See also http://www.virtualbox.org/manual/ch06.html#natforward
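For example, to reach the port this question's Dockerfile exposes (8080), a rule along these lines should work; the VM name is the boot2docker default and the rule name is illustrative:

```shell
# With the VM stopped, add a persistent host->VM forwarding rule for 8080:
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8080,tcp,,8080,,8080"

# If the VM is already running, controlvm can add the rule without stopping it:
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8080,tcp,,8080,,8080"
```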
I don't think you are passing the correct arguments to the run command.
Add the "-d" option to run it detached, in the background. If that does not work, you can start bash in your container by running the following (note: your container's command can be overridden because you use CMD instead of ENTRYPOINT in your Dockerfile; with an ENTRYPOINT, you would not be able to override it and start bash):
docker run -i -t image_id /bin/bash
Also, you will need to set up some port forwarding if you want to be able to hit the port your service is listening on. The EXPOSE instruction in your Dockerfile allows other containers running on your Docker host to hit the port, but it doesn't make the port visible to the boot2docker VM or your host. You might be able to do a docker inspect $container_id and find the correct IP and port to hit, but I found it easier to just set up port forwarding. You will need to forward the port from the container to the VM, and from the VM to your host. For the forwarding from the container to the VM, use the -p option:
docker run -p 80:8080 image_name
This will forward port 8080 in the container to port 80 (on 0.0.0.0) of your boot2docker VM. To set up forwarding from the VM to your host, open VirtualBox and enter a rule like:
Name: open_80, Protocol: TCP, Host IP: 0.0.0.0, Host Port: 80, Guest IP: (blank), Guest Port: 80
That rule forwards your local computer's port 80 to the VM's port 80. The port forwarding can also be set up from the command line (like what Andy pointed out above), but you would need to stop boot2docker first.
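Putting the two halves together for this question's image, an end-to-end sketch might look like this (the rule name is illustrative, and the VM name is the boot2docker default):

```shell
# Container -> VM: publish container port 8080 on the VM's port 80
docker run -d -p 80:8080 front-portal

# VM -> host: add a VirtualBox NAT rule while the VM is running
VBoxManage controlvm "boot2docker-vm" natpf1 "open_80,tcp,,80,,80"

# The app should then answer on the host:
curl http://localhost:80/
```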
Related
I'm using Laravel Sail and have published the Dockerfile (so I can edit it manually). The reason being that I need to use an OpenSSH server on the container itself for a program I'm using.
I am able to install OpenSSH by adding && apt-get install -y openssh-server to the packages section and configure it with the following:
RUN mkdir /var/run/sshd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
EXPOSE 22
The issue though is that it does not start when the container starts.
According to this answer you can add an ENTRYPOINT as follows:
ENTRYPOINT service ssh restart && bash
However, you can only have one ENTRYPOINT, and Laravel Sail has this one by default:
ENTRYPOINT ["start-container"]
I have tried the following:
add /usr/sbin/sshd to the start-container script - it didn't start
add CMD ["/usr/sbin/sshd", "-D"] into the Dockerfile as per this answer - it gave a permission denied error when starting the container
As the PHP container uses supervisor, you can add another program to the config. Reference for the config can be found here.
Example
ssh.ini:
[program:sshd]
command=/usr/sbin/sshd -D
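The ssh.ini file then has to end up where supervisord looks for program definitions inside the container; the exact include directory depends on the image's supervisord.conf, so the path below is an assumption to verify against your image:

```dockerfile
# Copy the sshd program definition into supervisor's include directory
# (check the [include] section of the image's supervisord.conf first).
COPY ssh.ini /etc/supervisor/conf.d/ssh.ini
```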
I am trying to deploy a Spring Boot Docker container, based on an OpenJDK image, to an App Service on Azure. What baffles me is the time the web app takes on the initial run (only during the initial run). I also see on the Kudu console that the container started up in less than 6 seconds, but the App Service ran for more than 200 seconds and failed. Please see the attached screenshot. Has someone faced this issue before?
Edit 1: Adding the Dockerfile
FROM openjdk:11-jre-slim
LABEL Maintainer="Aravind"
VOLUME /tmp
ENV SSH_PASSWD "root:Docker!"
RUN echo "root:Docker!" | chpasswd
RUN apt-get update && apt-get install -y curl && apt-get install -y --no-install-recommends dialog && apt-get install -y --no-install-recommends openssh-server && apt-get install -y sudo
RUN useradd -m -d /home/spring spring && usermod -a -G spring spring
COPY sshd_config /etc/ssh/
RUN mkdir -p /tmp
COPY ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
CMD ["/usr/sbin/sshd", "-D"]
USER spring:spring
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
EXPOSE 8080:8080
USER root:root
ENTRYPOINT ["init.sh"]
After long research and help from MS support, I finally figured out the issue. As I said before, it is not related to how the container starts up, since the container was starting in less than 6 seconds.
What we noticed is that when the startup fails due to an HTTP health-check timeout, the app is starting up with port 80 as the listening port. When it is successful, it starts up with port 8080.
Spring Boot's default listening port is 8080. The fix is to manually add this configuration to the App Service:
App Setting Name: WEBSITES_PORT
Value: 8080
The above configuration fixed the issue, and the time to start is now just the time taken by the Docker container within the App Service.
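If you prefer to script the fix rather than use the portal, the same setting can be applied with the Azure CLI; the app and resource-group names below are placeholders:

```shell
# Tell App Service which port the container listens on
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings WEBSITES_PORT=8080
```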
I have a Ruby on Rails environment, and I'm converting it to run in Docker. This is largely because the development machine is a Windows laptop and the server is not. I have the Docker container mostly up and running, and now I want to connect the RubyMine debugger. To accomplish this, the recommendation is to set up an SSH server in the container:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I successfully added SSHD to the container using the Dockerfile lines from https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image, minus the EXPOSE 22 (because it wasn't working with the port mapping in the docker-compose.yml). But the port is not accessible on the local machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6652389d248c civilservice_web "bundle exec rails..." 16 minutes ago Up 16 minutes 0.0.0.0:3000->3000/tcp, 0.0.0.0:3022->22/tcp civilservice_web_1
When I try to point PUTTY at localhost and 3022, it says that the server unexpectedly closed the connection.
What am I missing here?
This is my dockerfile
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'
services:
web:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- .:/CivilService
ports:
- "3000:3000"
- "3022:22"
DOCKER_HOST doesn't appear to be an environment variable
docker version outputs the following
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 60ccb22
Built: Thu Feb 23 10:40:59 2017
OS/Arch: windows/amd64
Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:52:04 2017
OS/Arch: linux/amd64
Experimental: true
docker run -it --rm --net container:civilservice_web_1 busybox netstat -lnt outputs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
SSHD is now running alongside the Rails app, but the recipe I was working from for setting up the service is not correct for the flavor of Linux that came with my base image: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image
The image I'm using is based on Debian 8. Could someone point me at where the example breaks down?
Your sshd process isn't running. That's visible in the netstat output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
But as user2105103 points out, you can see why by comparing your docker-compose.yml with the Dockerfile. You define the sshd command in the image with this Dockerfile line:
CMD ["/usr/sbin/sshd", "-D"]
But then you override your image setting when running the container with the docker-compose command:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
So the only thing running, as you can see in the netstat output, is the Rails app listening on 3000. If you need multiple commands to run, you can docker exec to kick off the second command (not recommended for a second service like this), use a command that launches sshd in the background and Rails in the foreground (fairly ugly), or consider something like supervisord.
Personally, I'd skip sshd and just use docker exec -it civilservice_web_1 /bin/bash to get a prompt inside the container when you need it.
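If you do go the supervisord route, a minimal config running both processes might look like this; the paths and commands are taken from the question's Dockerfile and docker-compose.yml, and the config location is an assumption to check against your image:

```ini
; /etc/supervisor/conf.d/app.conf -- run sshd and Rails side by side
[program:sshd]
command=/usr/sbin/sshd -D

[program:rails]
command=bundle exec rails s -p 3000 -b 0.0.0.0
directory=/MyApp
```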
First of all, sorry for my English :-)
Second, I want to run and debug my Spring application in a Docker container. The container with the application starts up without any problem, and I can reach the app from a browser.
I'm developing it in IntelliJ IDEA on Linux Mint, and I would like to retrieve the debug information from my container. But when I start the app in debug mode, IDEA tells me:
Cannot retrieve debug connection: java.net.MalformedURLException: unknown protocol: unix
Here's my Dockerfile:
FROM tomcat:8-jre8
RUN apt-get update -y && apt-get install -y \
curl \
vim
RUN rm -rfd /usr/local/tomcat/webapps/ROOT
RUN mkdir -p /usr/local/tomcat/conf/Catalina/localhost
RUN echo "<Context docBase=\"/usr/local/tomcat/webapps/ROOT\" path=\"\" reloadable=\"true\" />" >> /usr/local/tomcat/conf/Catalina/localhost/ROOT.xml
ENV JPDA_ADDRESS=8000
ENV JPDA_TRANSPORT=dt_socket
ENV JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,address=8000,suspend=n,server=y
EXPOSE 8000 8080
In the run configuration the port bindings are correct, and the app deploys successfully. Could someone help me? :-)
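For reference, the JPDA_ADDRESS and JPDA_TRANSPORT variables in the Dockerfile above are only read when Tomcat is launched via catalina.sh jpda run; with the image's default plain run command, only the JAVA_OPTS agentlib line applies. One way to start in JPDA mode is to override the command when running the container (the image name is illustrative):

```shell
# Start Tomcat with the JDWP agent listening on 8000, and publish both ports
docker run -p 8000:8000 -p 8080:8080 my-spring-app catalina.sh jpda run
```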
I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running a container from the image created from this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana's port 5601 on my host machine. My browser says ERR_CONNECTION_REFUSED.
I am able to access port 5000, though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent Dockerfile devdb/kibana uses a script to start Kibana and Elasticsearch when the container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, Elasticsearch and Kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend you follow the instructions in their README to learn how to add your gunicorn to the list of services to start. Basically, you would have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
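One detail that's easy to miss: runit only executes the run script if it carries the execute bit, and COPY preserves the source file's permissions. A quick local sketch (the script body mirrors the one above; `exec` makes gunicorn replace the shell so it receives signals directly):

```shell
# Recreate the run script locally (illustrative), then mark it executable
# so the execute bit survives when COPY places it in the image.
cat > run <<'EOF'
#!/bin/bash
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
EOF
chmod +x run
```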