How to connect to SSHD inside a Docker container from Windows?

I have a Ruby on Rails environment, and I'm converting it to run in Docker, largely because the development machine is a Windows laptop and the server is not. I have the Docker container mostly up and running, and now I want to connect the RubyMine debugger. To accomplish this, the recommendation is to set up an SSH server in the container:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I successfully added SSHD to the container using the Dockerfile lines from https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image, minus the EXPOSE 22 (because it wasn't working with the port mapping in the docker-compose.yml). But the port is not accessible on the local machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6652389d248c civilservice_web "bundle exec rails..." 16 minutes ago Up 16 minutes 0.0.0.0:3000->3000/tcp, 0.0.0.0:3022->22/tcp civilservice_web_1
When I try to point PUTTY at localhost and 3022, it says that the server unexpectedly closed the connection.
What am I missing here?
This is my Dockerfile:
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/CivilService
    ports:
      - "3000:3000"
      - "3022:22"
DOCKER_HOST doesn't appear to be set as an environment variable.
docker version outputs the following
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 60ccb22
Built: Thu Feb 23 10:40:59 2017
OS/Arch: windows/amd64
Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:52:04 2017
OS/Arch: linux/amd64
Experimental: true
docker run -it --rm --net container:civilservice_web_1 busybox netstat -lnt outputs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
SSHD is now running alongside the Rails app, but the recipe I was working from for setting up the service is not correct for the flavor of Linux that came with my base image: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image
The image I'm using is based on Debian 8. Could someone point me at where the example breaks down?

Your sshd process isn't running. That's visible in the netstat output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
But as user2105103 points out, this is visible just by comparing your docker-compose.yml with the Dockerfile. You define the sshd command in the image with a Dockerfile line:
CMD ["/usr/sbin/sshd", "-D"]
But then you override your image setting when running the container with the docker-compose command:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
So, the only thing run, as you can see in the netstat, is the rails app listening on 3000. If you need multiple commands to run, then you can docker exec to kick off the second command (not recommended for a second service like this), use a command that launches sshd in the background and rails in the foreground (fairly ugly), or you can consider something like supervisord.
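For illustration, here is a minimal sketch of the "sshd in the background, Rails in the foreground" option, assuming the paths from the Dockerfile above (the script name docker-entrypoint.sh and its location are assumptions, not something from your setup):
#!/bin/bash
# docker-entrypoint.sh (hypothetical): start sshd as a background daemon,
# then exec the Rails server so it stays in the foreground and receives container signals.
/usr/sbin/sshd
exec bundle exec rails s -p 3000 -b '0.0.0.0'
You would COPY the script into the image, mark it executable, and point command: in docker-compose.yml (or CMD in the Dockerfile) at this script instead of the bare rails command.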
Personally, I'd skip sshd and just use docker exec -it civilservice_web_1 /bin/bash to get a prompt inside the container when you need it.

Related

Docker / Postgres - Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use [duplicate]

When I run docker-compose up in my Docker project it fails with the following message:
Error starting userland proxy: listen tcp 0.0.0.0:3000: bind: address already in use
netstat -pna | grep 3000
shows this:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
I've already tried docker-compose down, but it doesn't help.
In your case some other process was using the port, and as indicated in the comments, sudo netstat -pna | grep 3000 helped you solve the problem.
In other cases (I encountered it many times myself) it is mostly the same container running in another instance. In that case docker ps was very helpful, as I often left the same containers running in other directories and then tried to run them again elsewhere, where the same container names were used.
How docker ps helped me:
docker rm -f $(docker ps -aq) is a short command which I use to remove all containers.
This helped me:
docker-compose down # Stop container on current dir if there is a docker-compose.yml
docker rm -fv $(docker ps -aq) # Remove all containers
sudo lsof -i -P -n | grep <port number> # List who's using the port
and then:
kill -9 <process id> (macOS) or sudo kill <process id> (Linux).
Source: comment by user Rub21.
I had the same problem. I fixed this by stopping the Apache2 service on my host.
You can easily kill the process listening on that port with the one-liner below:
kill -9 $(lsof -t -i tcp:<port#>)
For example, for port 3000:
kill -9 $(lsof -t -i tcp:3000)
or on Ubuntu:
sudo kill -9 `sudo lsof -t -i:8000`
Man page for lsof : https://man7.org/linux/man-pages/man8/lsof.8.html
-9 sends SIGKILL, which terminates the process immediately without any cleanup.
(Not directly related, but possibly useful if yours is the port 5000 mystery): the culprit process comes with macOS Monterey.
Port 5000 is commonly used to serve local development servers. After updating to the latest macOS, Docker was unable to bind to port 5000 because it was already in use. (You may find a message along the lines of Port 5000 already in use.)
By running lsof -i :5000, I found out that the process using the port was named ControlCenter, which is a native macOS application. If this is happening to you, even if you kill the application by brute force, it will restart itself. On my laptop, lsof -i :5000 showed that Control Center was running as process id 433. I could kill process 433, but macOS keeps restarting it.
The process running on this port turns out to be an AirPlay server. You can deactivate it in
System Preferences › Sharing by unchecking AirPlay Receiver to release port 5000.
I had the same problem.
docker-compose down --rmi all (in the same directory where you run docker-compose up)
helped.
UPD: CAUTION - this will also delete the local Docker images you've pulled (from a comment).
For Linux/Unix:
Search for the process with the following command:
netstat -nlp | grep 8888
It will show the process running on this port; then kill that process using its PID (look for the PID in the row):
kill <PID>
In some cases it is critical to perform more in-depth debugging of the problem before stopping a container or killing a process.
Consider following the checklist below:
1) Check your current docker compose environment
Run docker-compose ps. If port is in use by another container, stop it with docker-compose stop <service-name-in-compose-file> or remove it by replacing stop with rm.
2) Check the containers running outside your current workspace
Run docker ps to see list of all containers running under your host.
If you find the port is in use by another container, you can stop it with docker stop <container-id>.
(*) Because you're not under the scope of the original compose environment, it is good practice to first use docker inspect to gather more information about the container you're about to stop (see the example after this checklist).
3) Check if port is used by other processes running on the host
For example if the port is 6379 run:
$ sudo netstat -ltnp | grep ':6379'
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 915/redis-server 12
tcp6 0 0 ::1:6379 :::* LISTEN 915/redis-server 12
(*) You can also use the lsof command which is mainly used to retrieve information about files that are opened by various processes (I suggest running netstat before that).
So, in the case of the output above the PID is 915. Now you can run:
$ ps j 915
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 915 915 915 ? -1 Ssl 123 0:11 /usr/bin/redis-server 127.0.0.1:6379
And see the ID of the parent process (PPID) and the execution command.
You can also run $ pstree -s <PID> for a visual display of the process and its related processes.
In our case we can see that the process is probably a daemon (its PPID is 1). In that case consider running: A) $ cat /proc/<PID>/status to get more in-depth information about the process, like the number of threads it spawned, its capabilities, etc.
B) $ systemctl status <PID> in order to see the systemd unit that caused the creation of a specific process. If the service is not critical - you can stop and disable the service.
4) Restart Docker service
Run: sudo service docker restart.
5) You reached this point and...
Only if it's not placing your system at risk, consider restarting the server.
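As an example of the docker inspect call mentioned in step 2, the command below (the format string is just one possibility) prints a container's name and its published ports before you decide to stop it:
docker inspect --format '{{ .Name }} {{ json .NetworkSettings.Ports }}' <container-id>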
In my case it was:
Error starting userland proxy: listen tcp 0.0.0.0:9000: bind: address already in use
All I needed was to turn off debug listening in PhpStorm.
Most probably this is because you are already running a web server on your host OS, so it conflicts with the web server that Docker is attempting to start.
So try this one-liner before trying anything else:
sudo service apache2 stop; sudo service nginx stop; sudo nginx -s stop;
I had Apache running on my Ubuntu machine. I used this command to stop it:
sudo /etc/init.d/apache2 stop
I was getting the error below when I was trying to launch a new container:
listen tcp 0.0.0.0:8080: bind: address already in use.
To check which process is running on port 8080, run the command below:
netstat -tulnp | grep 8080
I got the output below:
[root@ip-112-x6x-2x-xxx.xxxxx.compute.internal (aws_main) ~]# netstat -tulnp | grep 8080
tcp   0   0 0.0.0.0:8080   0.0.0.0:*   LISTEN   12749/java
Run:
kill -9 12749
Then try to relaunch the container; it should work.
If the Redis server was started as a service, it will restart itself when you use kill -9 <process_id> or sudo kill -9 `sudo lsof -t -i:<port_number>`. In that case you will need to stop the Redis service with the following command:
sudo service redis-server stop
I upgraded my docker this afternoon and ran into the same problem. I tried restarting docker but no luck.
Finally, I had to restart my computer and it worked. Definitely a bug.
Check docker-compose.yml, it might be the case that the port is specified twice.
version: '3'
services:
  registry:
    image: mysql:5.7
    ports:
      - "3306:3306" # <--- remove either this line or next
      - "127.0.0.1:3306:3306"
Changing network_mode: "bridge" to "host" did it for me.
This, with:
version: '2.2'
services:
  bind:
    image: sameersbn/bind:latest
    dns: 127.0.0.1
    ports:
      - 172.17.42.1:53:53/udp
      - 172.17.42.1:10000:10000
    volumes:
      - "/srv/docker/bind:/data"
    environment:
      - 'ROOT_PASSWORD=secret'
    network_mode: "host"
I ran into the same issue several times. Restarting docker seems to do the trick
A variation of @DmitrySandalov's answer: I had Tomcat/Java running on 8080, which needed to keep going. Looked at the docker-compose.yml file and altered the entry for 8080 to another port of my choosing.
nginx:
  build: nginx
  ports:
    #- '8080:80'   <-- original entry
    - '8880:80'
    - '8443:443'
Worked perfectly. (The only wrinkle is the change will be wiped if I ever update the project, since it's coming from an external repo.)
First, check which service is running on the port in question. In your case, you are already using port number 3000. On Windows:
netstat -aof | findstr :3000
Then stop the process that is running on that port. On macOS/Linux you can find it with:
lsof -i tcp:3000
I resolve the issue by restarting Docker.
It makes more sense to change the port in your Docker setup instead of shutting down other services that use port 80.
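For example, if something on the host already occupies port 80, remapping only the host side of the ports entry in docker-compose.yml avoids the conflict (the service name web here is an assumption):
services:
  web:
    ports:
      - "8080:80"   # host port 8080 -> container port 80; host port 80 stays free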
Just a side note if you have the same issue on Windows:
In my case the process in the way was just grafana-server.exe. I had first downloaded the binary version and double-clicked the executable, and it now starts as a service under the SYSTEM user, which I cannot taskkill (no permission).
I had to go to the Windows "Services" manager, search for the "Grafana" service, and stop it. After that, port 3000 was no longer occupied.
Hope that helps.
The one that was using port 8888 was Jupyter, and I had to change the Jupyter notebook configuration file to run it on another port.
To list who is using that specific port:
sudo lsof -i -P -n | grep 9
You can specify the port you want Jupyter to run uncommenting/editing the following line in ~/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.port = 9999
In case you don't have a jupyter_notebook_config.py try running jupyter notebook --generate-config. See this for further details on Jupyter configuration.
Before, it was running with: docker run -d --name oracle -p 1521:1521 -p 5500:5500 qa/oracle
I just changed the port: docker run -d --name oracle -p 1522:1522 -p 5500:5500 qa/oracle
It worked fine for me!
On my machine no PID was shown by netstat -tulpn for the in-use port (8080), so I could not kill it; killing the containers and restarting the computer did not work either. The service docker restart command restarted Docker for me (Ubuntu), the port was no longer in use, and I am a happy chap and off to lunch.
Maybe it is a bit crude, but it works for me: restart the Docker service itself.
sudo service docker restart
Hope it works for you too!
I ran the container with another port, like 8082. :-)
I came across this problem. My simple solution was to remove MongoDB from the system.
Commands to remove MongoDB on Ubuntu:
sudo apt-get purge mongodb mongodb-clients mongodb-server mongodb-dev
sudo apt-get purge mongodb-10gen
sudo apt-get autoremove
Let me add one more case, because I had the same error and none of the solutions listed so far worked:
serv1:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.2
...
serv2:
  ...
  # no IP assignment, no dependencies
networks:
  privnet:
    ipam:
      driver: default
      config:
        - subnet: 10.10.100.0/24
Depending on the init order, serv2 may get assigned the IP 10.10.100.2 before serv1 is started, so I just assign IPs manually for all containers to avoid the error. Maybe there are other, more elegant ways.
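For completeness, a sketch of the manual assignment for serv2 (the address 10.10.100.3 is just an example inside the subnet above):
serv2:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.3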
I had the same problem, and stopping the Docker container resolved it.
sudo docker container stop <container-name>
I solved it with: sudo service redis-server stop

Securing a localhost port for a Flask/Celery app running locally on 0.0.0.0 in Docker on MacOS

My app has a Flask backend and an Angular/Electron frontend. The app runs locally on Mac Catalina. Flask, Celery and Redis are in separate docker containers, while the frontend is outside Docker. The Flask container is listening on port 0.0.0.0:5078. I've set CORS policy to allow messages from "127.0.0.1:4200" only, sent by the frontend. There's no need for internet connections. The backend containers will be launched by the frontend by emulating a terminal command. I'll install the app remotely on non-technical users' Catalina MacBooks.
Question: According to Docker might be exposing ports to the world, Beware of exposing ports in Docker and Docker not blocked by macOS firewall, this use of 0.0.0.0:5078 is a security threat. How can I resolve this threat, e.g. by blocking any external connections to this port?
Here's some Python 3.8 code:
# imports: waitress, flask_cors, blueprint
cors = CORS(blueprint, resources={r"/*": {"origins": ["http://127.0.0.1:4200"]}})
if __name__ == "__main__":
    serve(flask_app, host='0.0.0.0', port=5078, threads=8)
Here's the Dockerfile:
FROM python:3.8.3-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
ENV BUILD_DEPS="build-essential" \
APP_DEPS="curl libpq-dev"
RUN apt-get update \
&& apt-get install -y ${BUILD_DEPS} ${APP_DEPS} --no-install-recommends \
&& pip install --default-timeout=10000 -r requirements.txt
ARG FLASK_ENV="development"
ENV FLASK_ENV="${FLASK_ENV}" \
FLASK_APP="back5x.api.app" \
PYTHONUNBUFFERED="true"\
FLASK_DEBUG=1
COPY . .
RUN ["chmod", "+x", "/app/docker-entrypoint.sh"]
ENTRYPOINT ["/app/docker-entrypoint.sh"]
EXPOSE 5078
CMD ["python", "main.py"]
and the docker-compose:
version: "3.8"
services:
redis:
# ...
web:
build:
context: "."
args:
- "FLASK_ENV=development"
depends_on:
- "redis"
- "worker"
env_file:
- ".env"
environment:
FLASK_DEBUG: 1
FLASK_APP: back5x.api.app.py
healthcheck:
test: "${DOCKER_HEALTHCHECK_TEST:-curl localhost:5078/healthy}"
...
ports:
- "5078:5078"
restart: "unless-stopped"
volumes:
- ".:/app"
worker: #celery worker
...
volumes:
redis: {}
Tried:
The Docker-based solutions I've found use Linux iptables, eg Disallow egress from Docker containers on Docker for Mac and the above references. So I've added these to the Dockerfile:
RUN apt-get install -y iptables --no-install-recommends #after pip install
RUN iptables -N DOCKER-USER #after COPY . .
RUN iptables -I FORWARD -j DOCKER-USER
RUN iptables -A DOCKER-USER -j RETURN
RUN iptables -I DOCKER-USER -i eth0 ! -s 0.0.0.0 -j DROP
Without the middle three lines, I got an error that DOCKER-USER couldn't be found; with them, that I must run as root. I've tried privileged mode and cap_add, but as I'm new to Docker, I haven't gotten this to work.
I've also looked into defining a rule in Mac's PF firewall to block external connections to the port in question. However, this is not ideal for people who'll be using my app. A similar situation is with installing the paid-for "Little Snitch" app.
Before going this route, could there be a code or Docker-based solution? (Or perhaps there is an appropriate command for launching the backend?)
A working solution is based on David Maze's and Matt's comments and on this question. These are the steps:
Open Docker for Mac, Preferences, and Docker Engine. Add "ip": "127.0.0.1" to the JSON config file.
In the docker-compose.yml, set ports to "127.0.0.1:5078:5078" for the web service (see the fragment below).
Leave the Dockerfile and the Python code as before: i.e., the Flask host is still 0.0.0.0.
As I checked, messages from Electron's localhost 4201 went through. Also, running netstat -anvp tcp | awk 'NR<3 || /LISTEN/' shows that the insecure port 0.0.0.0.5078 is no longer exposed to the outside.
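For reference, the relevant fragment of the compose file after step 2 would look roughly like this (only the ports entry of the web service changes):
web:
  ...
  ports:
    - "127.0.0.1:5078:5078"   # publish only on the host's loopback interface
With this binding the port is reachable from the Mac itself, so the Electron frontend still works, but it is not reachable from other machines on the network.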

Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use

Problem
I'm trying to start postgres in a docker container on my Mac, but I keep getting the following error message
docker: Error response from daemon: driver failed programming external connectivity on endpoint postgres (8392b9e5cfaa28f480fe1009dee461f97e82499726f4afc4e916358dd2d2f61e): Error starting userland proxy: Failed to bind tcp 0.0.0.0:5432 address already in use.
I have postgres installed locally, but I stopped it and running
pg_ctl status
returns
pg_ctl: no server running
I've run the following to check what's running on 5432
lsof -i tcp:5432
&
netstat -anp tcp | grep 5432
and nothing is running on the port.
Versions
Mac - OS X El Capitan Version 10.11.2
PostgreSQL - 9.5
Docker - Docker version 1.12.0-rc2, build 906eacd, experimental
If lsof -i :5432 doesn't show you any output, you can use sudo ss -lptn 'sport = :5432' to see what process is bound to the port.
Proceed further with kill <pid>
If you execute lsof -i :5432 on the host you can see what process is bound to the port.
Some instance of Postgres is running. You can execute kill <pid> to kill it if you want. You can also use 5432 instead of 5432:5432 in your docker command or docker-compose file and let docker choose the host port automatically.
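As a small illustration of that last option (the container name pg is a placeholder), publishing just the container port lets Docker pick a free ephemeral host port, which you can then look up with docker port:
docker run -d --name pg -p 5432 postgres:9.5
docker port pg 5432        # prints something like 0.0.0.0:32768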
The first thing you should do is stop the PostgreSQL service. In most cases this fixes the issue.
sudo service postgresql stop
If the above doesn't work, then add the following line to /etc/postgresql/12/main/postgresql.conf:
sudo vim /etc/postgresql/12/main/postgresql.conf
## good if you add under CONNECTION AND AUTHENTICATION comments
listen_addresses = "*"
macOS Monterey
None of the above commands worked for me; I needed to make a few changes. So, here is the complete working solution:
Identify what is running in port 5432: sudo lsof -i :5432
Kill all the processes that are running under this port: sudo kill -9 <pid>
Run the command again to verify no process is running now: sudo lsof -i :5432
It worked for me; you probably need to stop Postgres:
sudo systemctl stop postgresql
In some cases it is critical to perform more in-depth debugging of the problem before stopping or killing the container/process.
Consider following the checklist below:
1) Check your current docker compose environment
Run docker-compose ps. If port is in use by another container, stop it with docker-compose stop <service-name-in-compose-file> or remove it by replacing stop with rm.
2) Check the containers running outside your current workspace
Run docker ps to see list of all containers running under your host.
If you find the port is in use by another container, you can stop it with docker stop <container-id>.
(*) Because you're not under the scope of the original compose environment, it is good practice to first use docker inspect to gather more information about the container you're about to stop.
3) Check if port is used by other processes running on the host
For example if the port is 6379 run:
$ sudo netstat -ltnp | grep ':6379'
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 915/redis-server 12
tcp6 0 0 ::1:6379 :::* LISTEN 915/redis-server 12
(*) You can also use the lsof command which is mainly used to retrieve information about files that are opened by various processes (I suggest running netstat before that).
So, in the case of the output above the PID is 915. Now you can run:
$ ps j 915
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 915 915 915 ? -1 Ssl 123 0:11 /usr/bin/redis-server 127.0.0.1:6379
And see the ID of the parent process (PPID) and the execution command.
You can also run $ pstree -s <PID> for a visual display of the process and its related processes (install with: brew install pstree).
In our case we can see that the process is probably a daemon (its PPID is 1). In that case consider running: A) $ cat /proc/<PID>/status to get more in-depth information about the process, like the number of threads it spawned, its capabilities, etc.
B) $ systemctl status <PID> in order to see the systemd unit that caused the creation of a specific process. If the service is not critical - you can stop and disable the service.
4) Restart Docker service
Run sudo service docker restart.
5) You reached this point and...
Only if it's not placing your system at risk, consider restarting the server.
None of these other answers worked for me. (For example, lsof and netstat just returned empty lines.) The following worked, though:
sudo -u postgres pg_ctl -D /Library/PostgreSQL/13/data stop
On a Mac, if you are OK with uninstalling Postgres for the time being:
brew uninstall postgres
Then check if the process still exists
sudo lsof -nP -i4TCP:5432 | grep LISTEN
If it exists, then kill it
kill -9 <pid>
Check again whether 5432 is being listened on; this time it should not be.
Go to the project and open docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    links:
      - db
      - mail-server
  db:
    image: "postgres"
    environment:
      POSTGRES_PASSWORD: hunter2
    ports:
      - "5432:9432"
  mail-server:
    image: "mailhog/mailhog"
    expose:
      - 1025
    ports:
      - "8026:8026"
"
change the ports to 8026:8026 because there is already running another container on this port number only change the port number"
This command line is very simple and easy to remember, using a third-party JavaScript package (npx comes built in with Node.js):
npx kill-port 3000
For a more powerful tool with search:
npx fkill-cli
I tried sudo kill -9 <PID> to stop the Postgres process, but it spawned again and again with a different PID. Then I found that it is registered as a launch daemon that runs on every startup:
cd /Library/LaunchDaemons/
sudo rm com.edb.launchd.postgresql-13.plist
Restart your PC to apply changes.

Debugging Tomcat in Docker container

I have CoreOS running in Vagrant. The Vagrant private network IP is 192.168.111.1. Inside CoreOS is a Docker container with Tomcat 8.0.32. Pretty much everything works OK (app deployment etc.); just debugging does not. Tomcat is mapped to port 8080 and the JPDA port should be 8000.
Facts
Tomcat JPDA is configured with:
JPDA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
It starts with catalina.sh jpda start command. The output in the console when running it with docker-compose is:
tomcat | Listening for transport dt_socket at address: 8000
From the container info I assume that ports are mapped as they should:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dcae1e0148f8 tomcat "/run.sh" 8 minutes ago Up 8 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:8080->8080/tcp tomcat
My docker image is based on this Dockerfile.
Problem
When trying to run the Remote debug configuration, I get the error Error running Debug: Unable to open debugger port (192.168.111.1:8000): java.net.ConnectException "Connection refused". I've tried changing various configuration options, but no luck. Am I missing something?
This is the command I use for this:
docker run -it --rm \
-e JPDA_ADDRESS=8000 \
-e JPDA_TRANSPORT=dt_socket \
-p 8888:8080 \
-p 9000:8000 \
-v D:/tc/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml \
tomcat:8.0 \
/usr/local/tomcat/bin/catalina.sh jpda run
Explanation
-e JPDA_ADDRESS=8000: debugging port in the container, passed as an environment variable
-e JPDA_TRANSPORT=dt_socket: transport type for debugging (socket), passed as an environment variable
-p 8888:8080: expose Tomcat port 8080 on the host as port 8888
-p 9000:8000: expose the Java debugging port 8000 on the host as port 9000
-v {host-file}:{container-file}: overwrite tomcat-users.xml with my local one, since I need access to the manager API; omit this line if it isn't necessary for your use case
tomcat:8.0: see https://hub.docker.com/_/tomcat/
/usr/local/tomcat/bin/catalina.sh jpda run: the command to run in the container
The accepted answer didn't work for me, apparently because I was using Java 11. It seems that if you're using Java 9 or newer, you need to specify the JPDA address like this:
JPDA_ADDRESS=*:8100
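A sketch of wiring that up with docker run (the host ports and the tomcat:9.0 tag here are assumptions):
docker run -it --rm \
  -e JPDA_ADDRESS='*:8100' \
  -e JPDA_TRANSPORT=dt_socket \
  -p 8080:8080 \
  -p 8100:8100 \
  tomcat:9.0 \
  /usr/local/tomcat/bin/catalina.sh jpda run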
You can always update the Dockerfile to something like the following:
FROM tomcat:8-jre8
MAINTAINER me
ADD target/app.war /usr/local/tomcat/webapps/app.war
ENV JPDA_ADDRESS="8000"
ENV JPDA_TRANSPORT="dt_socket"
EXPOSE 8080 8000
ENTRYPOINT ["catalina.sh", "jpda", "run"]
This does mean though that your docker file has debug on by default which is probably not suited to a production environment.
Try adding this to your Dockerfile:
ENV JPDA_ADDRESS=8000
ENV JPDA_TRANSPORT=dt_socket
It worked for me.
You need to make sure that port 8080 is exposed to IntelliJ for the connection. That is, while running Docker you need something like docker run -p 8080:8080.
For example, I was able to achieve a similar requirement by doing the steps/checks mentioned below.
This is what my docker run command looks like:
sudo docker run --privileged=true -d -p 63375:63375 -p 63372:8080 -v /tmp/:/usr/local/tomcat/webapps/config <container name>:<tag>
NOTE: I am exposing an extra port, 63375, on both the container and my host. The same port is used in CATALINA_OPTS below.
This is what my entry point (for the image I am building) looks like. NOTE: I am using CATALINA_OPTS. Also, I am using Maven to create the image, so below is an excerpt from pom.xml.
<entryPoint>
<shell>cd /usr/local/tomcat/bin; CATALINA_OPTS="-agentlib:jdwp=transport=dt_socket,address=63375,server=y,suspend=n" catalina.sh run</shell>
</entryPoint>
I resolved a similar, if not the same, issue when using docker-compose. It involved the environment variables not being passed properly from the docker-compose.yml file; see my Stack Overflow issue for details.
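If it helps, here is a minimal docker-compose sketch of passing those variables through (the service name, image tag, and host ports are assumptions):
services:
  tomcat:
    image: tomcat:8.0
    command: catalina.sh jpda run
    environment:
      JPDA_ADDRESS: "8000"
      JPDA_TRANSPORT: dt_socket
    ports:
      - "8888:8080"   # Tomcat itself
      - "9000:8000"   # JPDA debugging port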
For me it's cleaner this way:
docker run -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n" -p 8000:8000 tomcat:8.5-jdk8
This way you don't have to modify your container Dockerfile.
Explanation: all Java versions check the JAVA_TOOL_OPTIONS environment variable: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/envvars002.html
I have a similar setup in my local environment. I included JPDA_ADDRESS as an environment variable in the Dockerfile and recreated the containers.
ENV JPDA_ADDRESS 8000
#Expose port 8080, JMX port 13333 & Debug port 8000
EXPOSE 8080 13333 8000
CMD ["tail", "-f", "/dev/null"]

Not able to access Kibana running in a Docker container on port 5601

I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running the container from the image created from this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana port 5601 on my host machine. My browser page says ERR_CONNECTION_REFUSED
I am able to access port 5000 though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent Dockerfile devdb/kibana is using a script to start kibana and elasticsearch when the docker container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, elasticsearch and kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn to the list of services to start. Basically you would have to define a script named run and add it to your Docker image under the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script (which must be executable) should be something similar to:
#!/bin/bash
cd /deploy/app
/usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
