Securing a localhost port for a Flask/Celery app running locally on 0.0.0.0 in Docker on macOS

My app has a Flask backend and an Angular/Electron frontend. It runs locally on Mac Catalina. Flask, Celery and Redis live in separate Docker containers, while the frontend runs outside Docker. The Flask container listens on 0.0.0.0:5078, and the CORS policy allows requests only from "127.0.0.1:4200", which is where the frontend sends them from. There's no need for internet connections. The frontend launches the backend containers by emulating a terminal command. I'll be installing the app remotely on non-technical users' Catalina MacBooks.
Question: according to Docker might be exposing ports to the world, Beware of exposing ports in Docker and Docker not blocked by macOS firewall, this use of 0.0.0.0:5078 is a security threat. How can I resolve this threat, e.g. by blocking any external connections to this port?
Here's the relevant Python 3.8 code:
# imports (the blueprint and flask_app objects are defined elsewhere in the app)
from flask_cors import CORS
from waitress import serve

cors = CORS(blueprint, resources={r"/*": {"origins": ["http://127.0.0.1:4200"]}})

if __name__ == "__main__":
    serve(flask_app, host="0.0.0.0", port=5078, threads=8)
Here's the Dockerfile:
FROM python:3.8.3-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
ENV BUILD_DEPS="build-essential" \
    APP_DEPS="curl libpq-dev"
RUN apt-get update \
    && apt-get install -y ${BUILD_DEPS} ${APP_DEPS} --no-install-recommends \
    && pip install --default-timeout=10000 -r requirements.txt
ARG FLASK_ENV="development"
ENV FLASK_ENV="${FLASK_ENV}" \
    FLASK_APP="back5x.api.app" \
    PYTHONUNBUFFERED="true" \
    FLASK_DEBUG=1
COPY . .
RUN ["chmod", "+x", "/app/docker-entrypoint.sh"]
ENTRYPOINT ["/app/docker-entrypoint.sh"]
EXPOSE 5078
CMD ["python", "main.py"]
and the docker-compose.yml:
version: "3.8"

services:
  redis:
    # ...

  web:
    build:
      context: "."
      args:
        - "FLASK_ENV=development"
    depends_on:
      - "redis"
      - "worker"
    env_file:
      - ".env"
    environment:
      FLASK_DEBUG: 1
      FLASK_APP: back5x.api.app.py
    healthcheck:
      test: "${DOCKER_HEALTHCHECK_TEST:-curl localhost:5078/healthy}"
      # ...
    ports:
      - "5078:5078"
    restart: "unless-stopped"
    volumes:
      - ".:/app"

  worker:  # Celery worker
    # ...

volumes:
  redis: {}
Tried:
The Docker-based solutions I've found use Linux iptables, e.g. Disallow egress from Docker containers on Docker for Mac and the references above. So I added these lines to the Dockerfile:
RUN apt-get install -y iptables --no-install-recommends  # after pip install
RUN iptables -N DOCKER-USER  # after COPY . .
RUN iptables -I FORWARD -j DOCKER-USER
RUN iptables -A DOCKER-USER -j RETURN
RUN iptables -I DOCKER-USER -i eth0 ! -s 0.0.0.0 -j DROP
Without the middle three lines, I got an error that the DOCKER-USER chain couldn't be found; with them, that I must run as root. I've tried privileged mode and cap_add, but as I'm new to Docker, I haven't got this to work.
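For what it's worth, the errors are expected: the DOCKER-USER chain lives in the iptables of the Docker host (which, with Docker Desktop on a Mac, is its Linux VM), not in the container being built, and iptables state lives in the kernel rather than in the image's filesystem, so rules applied in a RUN step can't be baked into an image anyway. If a container is to manage its own rules, they have to be applied at container start and the container needs the NET_ADMIN capability; a sketch of the compose side of that (not a fix for the actual problem here):
web:
  cap_add:
    - NET_ADMIN
  # ...with the iptables commands moved from the Dockerfile into docker-entrypoint.sh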
I've also looked into defining a rule in the Mac's PF firewall to block external connections to the port in question. However, this isn't ideal for the people who'll be using my app, and the same goes for asking them to install the paid-for "Little Snitch" app.
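For reference, the kind of PF rule this would involve is roughly the following (a sketch, assuming the default /etc/pf.conf; loopback traffic to the port stays allowed while external TCP connections are dropped):
pass in quick on lo0 proto tcp from any to any port 5078
block drop in quick proto tcp from any to any port 5078
After adding the rules, they would be loaded with sudo pfctl -f /etc/pf.conf and PF enabled with sudo pfctl -e.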
Before going that route, could there be a code- or Docker-based solution? (Or perhaps there's an appropriate command for launching the backend?)

A working solution is based on David Maze's and Matt's comments and on this question. These are the steps:
Open Docker Desktop for Mac, go to Preferences and then Docker Engine, and add "ip": "127.0.0.1" to the JSON config (see the snippets after these steps).
In the docker-compose.yml, set the web service's ports to "127.0.0.1:5078:5078".
Leave the Dockerfile and the Python code as before, i.e. the Flask host is still 0.0.0.0.
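For reference, the two configuration changes look roughly like this. The daemon setting is Docker's "ip" option, which changes the default address that published ports bind to; merge it into whatever is already in the Docker Engine JSON config:
{
  "ip": "127.0.0.1"
}
and in docker-compose.yml:
    ports:
      - "127.0.0.1:5078:5078"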
As a check, messages from Electron's localhost 4201 still went through. Also, running netstat -anvp tcp | awk 'NR<3 || /LISTEN/' shows that the insecure 0.0.0.0.5078 listener is no longer exposed to the outside.

Related

Run Laravel Docker image with exposed ports (-p)

I have a Laravel app but I can't make it run with the docker run command. The last two instructions of my Dockerfile are
EXPOSE 9000
CMD ["php", "artisan", "serve","--port=9000"]
I am trying to make it run with:
docker run -p 9000:9000 my_image:latest
docker run --net="host" -p 9000:9000 my_image:latest
docker run --net="bridge" -p 9000:9000 my_image:latest
The only thing I see is the classic Laravel output
Laravel development server started: <http://127.0.0.1:9000>
What am I missing?
The problem is 127.0.0.1:9000, i.e., the server is bound to localhost within the container, instead of listening on an external interface. The solution is to use the --host 0.0.0.0 argument, which will bind the server to all available interfaces.
CMD ["php", "artisan", "serve", "--host", "0.0.0.0", "--port=9000"]

How to use local proxy settings in docker-compose

I am setting up a new server for our Redmine installation, since the old installation was done by hand, which makes it difficult to update everything properly. I decided to go with a Docker image but am having trouble starting the Docker container due to an error message. The host is running behind a proxy server, which I think is causing this problem, as everything else such as wget, curl, etc. is working fine.
Error message:
Pulling redmine (redmine:)...
ERROR: Get https://registry-1.docker.io/v2/: dial tcp 34.206.236.31:443: connect: connection refused
I searched on Google about using Docker/Docker-Compose with a proxy server in the background and found a few websites where people had the same issue but none of these really helped me with my problem.
I checked with the Docker documentation and found a guide but this does not seem to work for me: https://docs.docker.com/network/proxy/
I also found an answered question here on Stack Overflow: Using proxy on docker-compose in server, which might be the solution I am after, but I am unsure where exactly I have to put the solution. I guess the person means the docker-compose.yml file, but I could be wrong.
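(For reference, the guide linked above configures the Docker client through ~/.docker/config.json, roughly as below. That only injects proxy settings into containers and builds, not into the daemon that pulls the images, which is presumably why it didn't help with the connection refused error.)
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy_url:proxy_port",
      "httpsProxy": "http://proxy_url:proxy_port",
      "noProxy": "localhost,127.0.0.0/8"
    }
  }
}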
This is what my docker-compose.yml looks like:
version: '3.1'

services:
  redmine:
    image: redmine
    restart: always
    ports:
      - 80:3000
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: SECRET_PASSWORD

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: SECRET_PASSWORD
      MYSQL_DATABASE: redmine
I expect to be able to run the following command without the above error message:
docker-compose -f docker-compose.yml up -d
I did a bit more research and, with better keywords, found my solution. I wanted to share it with everyone, in case someone else ever needs it.
Create a folder for configuring the Docker service through systemd:
mkdir /etc/systemd/system/docker.service.d
Create a service configuration file at /etc/systemd/system/docker.service.d/http-proxy.conf and put the following in the newly created file:
[Service]
# NO_PROXY is optional and can be removed if not needed
# Change proxy_url to your proxy IP or FQDN and proxy_port to your proxy port
# For proxy servers that require username and password authentication, just add them to the URL (see the example below)
# Example without authentication
Environment="HTTP_PROXY=http://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
# Example with authentication
Environment="HTTP_PROXY=http://username:password@proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
# Example for SOCKS5
Environment="HTTP_PROXY=socks5://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
Reload systemd so that the new settings are read:
sudo systemctl daemon-reload
Verify that the Docker service's Environment is properly set:
sudo systemctl show docker --property Environment
Restart the Docker service so that it uses the updated Environment settings:
sudo systemctl restart docker
Now you can execute the docker-compose command on your machine without getting any connection refused error messages.
For a proxy server that requires username and password authentication: apart from adding the credentials in /etc/systemd/system/docker.service.d/http-proxy.conf, as suggested in this answer, I also had to add them to the Dockerfile. The following is a snippet from that Dockerfile:
FROM ubuntu:16.04

ENV http_proxy http://username:password@proxy_url:proxy_port
ENV https_proxy http://username:password@proxy_url:proxy_port

RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
       build-essential \
       bla bla bla ...
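If you'd rather not bake the credentials into the image, the same values can be passed at build time instead; http_proxy and https_proxy are among Docker's predefined build arguments, so something along these lines should work (the image tag and proxy values are placeholders):
docker build \
  --build-arg http_proxy=http://username:password@proxy_url:proxy_port \
  --build-arg https_proxy=http://username:password@proxy_url:proxy_port \
  -t my_image .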

How to connect to SSHD inside a Docker container from Windows?

I have a Ruby on Rails environment, and I'm converting it to run in Docker, largely because the development machine is a Windows laptop and the server is not. I have the Docker container mostly up and running, and now I want to connect the RubyMine debugger. To accomplish this, the recommendation is to set up an SSH server in the container.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I successfully added SSHD to the container using the Dockerfile lines from https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image, minus the EXPOSE 22 (because it wasn't working with the port mapping in the docker-compose.yml). But the port is not accessible on the local machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6652389d248c civilservice_web "bundle exec rails..." 16 minutes ago Up 16 minutes 0.0.0.0:3000->3000/tcp, 0.0.0.0:3022->22/tcp civilservice_web_1
When I try to point PuTTY at localhost and 3022, it says that the server unexpectedly closed the connection.
What am I missing here?
This is my Dockerfile:
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    nodejs \
    openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'

services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/CivilService
    ports:
      - "3000:3000"
      - "3022:22"
DOCKER_HOST doesn't appear to be an environment variable
docker version outputs the following
Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:40:59 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 07:52:04 2017
 OS/Arch:      linux/amd64
 Experimental: true
docker run -it --rm --net container:civilservice_web_1 busybox netstat -lnt outputs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
SSHD is now running alongside the Rails app, but the recipe I was working from for setting up the service is not correct for the flavor of Linux that came with my base image: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image
The image I'm using is based on Debian 8. Could someone point me at where the example breaks down?
Your sshd process isn't running. That's visible in the netstat output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
But as user2105103 points out, I could have realized that by comparing your docker-compose.yml with the Dockerfile. You define the sshd command in the image with this Dockerfile line:
CMD ["/usr/sbin/sshd", "-D"]
But then you override that image setting when running the container, with this command in your docker-compose.yml:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
So the only thing running, as you can see in the netstat output, is the Rails app listening on 3000. If you need multiple processes to run, you can docker exec to kick off the second one (not recommended for a second service like this), use a command that launches sshd in the background and Rails in the foreground (fairly ugly; sketched below), or consider something like supervisord.
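The "sshd in the background, Rails in the foreground" option could be a small wrapper script along these lines (a sketch; start-services.sh is a made-up name, and the script has to be copied into the image and be executable):
#!/bin/bash
# start sshd as a background daemon
/usr/sbin/sshd
# keep Rails in the foreground as the container's main process
exec bundle exec rails s -p 3000 -b '0.0.0.0'
You would then point the compose file at it, e.g. command: ./start-services.sh, instead of the bare rails command.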
Personally, I'd skip sshd and just use docker exec -it civilservice_web_1 /bin/bash to get a prompt inside the container when you need it.

Not able to access Kibana running in a Docker container on port 5601

I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running a container from the image created from this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana on port 5601 on my host machine; my browser says ERR_CONNECTION_REFUSED.
I am able to access port 5000, though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent image devdb/kibana uses a script to start Kibana and Elasticsearch when the container is started. See its CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfile.
Since your CMD only starts gunicorn, Elasticsearch and Kibana are never started; that's why there is no response on their respective network ports.
The image you inherit from itself inherits from phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend you follow the instructions in its README to learn how to add your gunicorn process to the list of services to start. Basically, you have to define a script named run and add it to your Docker image under the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
cd /deploy/app
/usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
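One detail that's easy to miss (an assumption about your build context rather than something stated in the README referenced above): the run script has to be executable inside the image, so if it isn't already executable when copied in, also add:
RUN chmod +x /etc/service/gunicorn/run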

How to force docker build to use devpi server for pip install command?

I am trying to build an image for my Flask-based web application using docker build. My Dockerfile looks like this:
FROM beehive-webstack:latest
MAINTAINER Anuvrat Parashar <anuvrat@zopper.com>
EXPOSE 5000
ADD . /srv/beehive/
RUN pip install -i http://localhost:4040/root/pypi/+simple/ -r /srv/beehive/requirements.txt
pip install without the -i flag works, but it downloads everything from PyPI, which is naturally slow.
The problem is that pip does not access the devpi server running on my laptop. How can I go about achieving that?
localhost refers to the Docker container, not to your host, since RUN lines are just commands executed inside the container. You thus have to use a network-reachable IP of your laptop.
Con: this makes your Dockerfile unportable if others don't have a PyPI mirror running.
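As a sketch of that approach (192.168.1.10 stands in for your laptop's network-reachable address; recent Docker for Mac/Windows also provides the special name host.docker.internal):
RUN pip install -i http://192.168.1.10:4040/root/pypi/+simple/ \
    --trusted-host 192.168.1.10 \
    -r /srv/beehive/requirements.txt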
One answer is a devpi helper container. You start a devpi Docker image and have it expose port 3141. Then you can add it as an extra source for pip install using environment variables in your Dockerfile.
Starting devpi using docker-compose:
devpi:
  image: scrapinghub/devpi
  container_name: devpi
  expose:
    - 3141
  volumes:
    - /path/to/devpi:/var/lib/devpi

myapp:
  build: .
  external_links:
    - devpi:devpi
docker-compose up -d devpi
Now you need to configure the client Docker container. It needs pip configured:
In your Dockerfile:
ENV PIP_EXTRA_INDEX_URL=http://devpi:3141/root/pypi/+simple/ \
    PIP_TRUSTED_HOST=devpi
Check it's working by logging into your container:
docker-compose run myapp bash
pip install --verbose nose
Output should include
2 location(s) to search for versions of nose:
* https://pypi.python.org/simple/nose/
* http://devpi:3141/root/pypi/+simple/nose/
Now you can upload packages to your devpi container, either from another container or via SFTP; for example:
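One way to do the upload with the devpi client (the index name and the empty root password are assumptions; adjust them to how your devpi server is configured):
pip install devpi-client
devpi use http://devpi:3141
devpi login root --password=''
devpi index -c dev            # create a private index
devpi use root/dev
devpi upload                  # run from the package's source directory
If you upload to a private index like this, point PIP_EXTRA_INDEX_URL at that index rather than root/pypi.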
This approach has the advantage of speeding up builds without breaking them if the devpi container is not present.
Note: don't publish devpi's ports without setting a strong password, as it's a security issue: people could use it to upload arbitrary code, which your application would then install and execute.
