I want to disable all outgoing connections initiated by Docker containers to the outside world. On Linux I can do this by adding a rule to the FORWARD chain. How do I do this in Docker for Mac?
I found out that Docker for Mac uses an xhyve VM, and that is where the docker0 interface lives. Which interface on the host does this connect to? Using nettop on the Mac I can see that Docker uses my en0 wireless interface, but I'm not sure whether Docker and xhyve are using the same interface.
Edit: Added docker-for-windows tag because they might have similar solutions (Hoping)
Edit 2: Docker for Mac has changed so the accepted solution changed a bit
Docker
$ docker run --net=host --privileged -ti alpine sh
# apk update && apk add iptables
# iptables -vnL
This and the rules could be turned into a Dockerfile and run with a --restart option. I think on-failure might work to reapply the rules when Docker for Mac starts up.
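A minimal sketch of what that could look like, assuming you simply want to drop everything forwarded out of the default docker0 bridge (the image name, tag, and exact rule here are illustrative, not a tested recipe):

FROM alpine
RUN apk add --no-cache iptables
# insert a rule that drops all traffic forwarded from the default bridge
CMD ["sh", "-c", "iptables -I FORWARD -i docker0 -j DROP"]

$ docker build -t block-egress .
$ docker run --net=host --privileged --restart on-failure block-egress

Because the container runs with --net=host, the rule is inserted in the VM's own network namespace and stays in place after the container exits.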
Virtual Machine
To get to the Linux VM:
mac$ brew install screen
mac$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Since the move to LinuxKit, this is not your average Linux host; everything is a container:
linuxkit:~# ctr -n services.linuxkit tasks ls
TASK PID STATUS
acpid 925 RUNNING
diagnose 967 RUNNING
host-timesync-daemon 1116 RUNNING
ntpd 1248 RUNNING
vpnkit-forwarder 1350 RUNNING
docker-ce 1011 RUNNING
kubelet 1198 RUNNING
trim-after-delete 1303 RUNNING
vsudd 1398 RUNNING
Use runc to move into the docker-ce (or docker) namespace:
linuxkit:~# runc --root /run/containerd/runc/default exec -t docker-ce /bin/sh
docker-ce # iptables -vnL
Note that rules will disappear after a restart of Docker for Mac. I haven't found the secret sauce for persisting system changes yet.
Use Ctrl-a then d to exit the screen session, otherwise you will bork the terminal.
OSX
For the easy (but paid) option, use Little Snitch and block outbound connections on OSX from com.docker.supervisor via vpnkit.
Try the Mac's pfctl command; it is roughly equivalent to iptables.
Here's the man page: https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man8/pfctl.8.html
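A rough sketch of a pf rule, assuming you just want to drop outbound TCP/UDP traffic to some destination range (the file name and subnet are placeholders, and pfctl -f replaces the active ruleset, so in practice you would hook this into an anchor referenced from /etc/pf.conf):

# block-docker.pf (hypothetical rules file)
block drop out quick proto { tcp udp } from any to 203.0.113.0/24

$ sudo pfctl -f block-docker.pf
$ sudo pfctl -e

Keep in mind that container traffic leaves the Mac through vpnkit, so from pf's point of view it looks like ordinary outbound traffic from the host rather than forwarded traffic.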
Related
I am attempting to run a Windows GUI app in a container on Linux. The intent is to protect an ancient Windows app that is no longer supported. So I get a Red Hat developer subscription, install RHEL 8.6 with container tools, run the universal base image 'UBI-INIT', and within the container I install the GNOME desktop with xrdp, and I successfully render the GUI desktop in a RHEL container.
Now that the container is working well, I commit it to an image, but when I run that image, the GUI fails to render. The xrdp session times out as if services are not running and/or ports are not accessible.
Within the container that I ran from the committed image:
I verify that all of the services necessary to support XRDP and GNOME are up and running.
journalctl does not seem to show any errors. There are complaints around rtkit, but I see similar errors in the working container.
I see no evidence that an xrdp connection was attempted in the xrdp or xrdp-sesman logs. But I am fairly certain that ports are not the issue because I can SSH to the container.
The commands I used to install and configure the working container are:
podman run -d -v /mnt/share:/share -p 53389:3389 -p 50022:22 --rm --privileged --name ubi-ini registry.access.redhat.com/ubi8/ubi-init;
podman exec -it ubi-ini bash
And within the container I run the following:
timedatectl set-timezone America/New_York
# GNOME desktop GUI
dnf install -y selinux-policy-targeted
dnf groupinstall -y --skip-broken "Server with GUI"
# xrdp
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install -y xrdp
echo 'if [ "$DISPLAY" != "" ]; then xhost +; fi' >> /etc/profile
sed -i '/^port=3389.*/a port=tcp://:3389' /etc/xrdp/xrdp.ini
useradd -g root -p $(echo "jde" | openssl passwd -1 -stdin) jde
usermod -aG wheel jde
systemctl enable xrdp xrdp-sesman gdm
systemctl unmask systemd-logind.service
systemctl restart sshd xrdp xrdp-sesman dbus gdm systemd-logind.service
I commit the image like this:
podman commit ubi-ini ubi-gui
I run the image with this command:
podman run -d -v /mnt/share:/share -p 63389:3389 -p 60022:22 --rm --privileged --name ubi-gui ubi-gui
xrdp communicates with the desktop manager through systemd; UBI-INIT is the only Linux base container image that supports systemd.
I suspect there is something about the processes in the derived container, but when I compare the working and non-working containers with ps aux, I don't see significant anomalies.
Any ideas?
I have absolutely no idea about 'XRDP', but I see you use a different host port in your second container instance; is that intentional?
Got this to work by disabling the firewall and SELinux everywhere, meaning on the container host and in the base container UBI-INIT as well. Now the image based on the modified container (with the GNOME desktop, XRDP, and disabled security) results in a container that serves the GUI desktop.
It's working fine except that gdm (the GNOME display manager) does not start even though it is enabled and all the other enabled services are OK. Still working that one out, but the basic question is answered: it was not the software stack but rather the security configuration. I suspect SELinux in the container somehow interfered with inter-process communication, because I am able to SSH on (mapped) port 22 externally.
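For reference, a rough sketch of what "disabling the firewall and SELinux" usually looks like on a RHEL 8 host; the exact commands used above are not given, so treat this as illustrative rather than the author's actual steps:

# on the container host
sudo systemctl disable --now firewalld
# permissive until next reboot
sudo setenforce 0
# persist across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config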
My docker version is:
docker --version
Docker version 20.10.2, build 2291f61
My Windows version is:
systeminfo
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.17763 N/A Build 17763
System Type: x64-based PC
My Dockerfile is:
FROM ubuntu:21.04
RUN apt-get update
RUN apt-get install -y bluez bluetooth usbutils
When I run the following command, I start the 'bluetooth_in_docker' container:
docker build -t bluetooth_in_docker . && docker run --privileged --net=host -it bluetooth_in_docker bash
Inside the container when I run the following, I get an error:
hciconfig dev
Can't open HCI socket.: Address family not supported by protocol
I got it working on Windows from inside WSL2, but it takes a lot of steps.
Follow https://github.com/dorssel/usbipd-win/discussions/310 to get your Bluetooth working inside WSL2. Verify that you can scan for Bluetooth devices inside your WSL2 distro.
Modify your Dockerfile to install Bluetooth as you did (bluez and usbutils might not be needed).
Now there are two options. The first option shares Bluetooth with the container; the second option gives the container exclusive control.
Sharing Bluetooth between the host and the container is possible by making a volume mount of /var/run/dbus and running the container with --privileged:
docker run -v /var/run/dbus/:/var/run/dbus/:z --privileged {containerImage}
Make sure that the dbus and bluetooth services are working in your host when running the container this way.
Giving the container exclusive control: in WSL2 (the host), run a docker container according to https://stackoverflow.com/a/64126744/1114918
Run sudo service bluetooth stop so that your Bluetooth is not "claimed" by the host (the linked answer uses killall; I think sudo service ... stop is cleaner).
Use an sh script to start dbus and bluetooth inside the container (a minimal sketch follows after these steps).
Run the container using:
docker run --rm --net=host --privileged myimage:mytag
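A minimal sketch of such a start script, assuming an Ubuntu-based image with bluez and dbus installed (the file name entrypoint.sh is just an example):

#!/bin/sh
# entrypoint.sh: start dbus and bluetooth, then hand over to the requested command
service dbus start
service bluetooth start
exec "$@"

Copy it into the image, make it executable, and set it as the ENTRYPOINT, or run it manually after starting the container.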
I'm using this container to start Elasticsearch in Docker. According to the manual, I have to update vm.max_map_count to start the container:
sudo sysctl -w vm.max_map_count=262144
but I can only update it in my host (container) AFTER I start it, while I'm unable to start it in the first place. Am I doing something wrong?
ERROR: bootstrap checks failed max virtual memory areas
vm.max_map_count [65530] likely too low, increase to at least [262144]
If I try to do it on my host machine (which is Mac) I get the following error.
sysctl: unknown oid 'vm.max_map_count'
Docker Engine installs the Linux VM where all containers are running, so the command to increase the limit should be executed in the Linux VM, not on the Mac.
How can I access the Linux VM installed by the Docker Engine via a terminal?
On Docker Toolbox
If you are on Docker Toolbox, use the terminal to SSH into the Docker machine and then apply the setting:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
exit
On Docker For Mac:
The vm.max_map_count setting must be set within the xhyve virtual machine:
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
If it asks you for a username and password, log in as root with no password.
If it just has a blank screen, press RETURN.
Then configure the sysctl setting as you would for Linux:
sysctl -w vm.max_map_count=262144
Exit by Control-A Control-\.
See the docs here.
Persistence
In some cases, this change does not persist across restarts of the VM. So, while screen'd into the VM, edit the file /etc/sysctl.d/00-alpine.conf and add the parameter vm.max_map_count=262144 to the end of the file.
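For example, while screen'd into the VM, appending the setting could look like this (assuming that file is still the one the VM reads at boot):

echo 'vm.max_map_count=262144' >> /etc/sysctl.d/00-alpine.conf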
On Latest Docker For Mac (Version 18.06.0-ce-mac70):
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
On the blank screen, press RETURN.
Then configure the sysctl setting as you would for Linux:
sysctl -w vm.max_map_count=262144
Exit by Control-A Control-\
For those using Docker Desktop on Windows 10: you have to execute wsl -d docker-desktop in the command line before running sysctl -w vm.max_map_count=262144.
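Put together, the sequence from PowerShell or cmd looks roughly like this:

wsl -d docker-desktop
sysctl -w vm.max_map_count=262144
exit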
For those who use Docker Desktop on Mac, you can easily increase the memory with the following steps:
click on docker desktop -> preferences...
navigate to 'Resources'
change the memory to whatever you need
click on 'Apply & Restart'
The folder has been moved and this is the new location:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
and then
sysctl -w vm.max_map_count=262144
For Mac users: you might have a problem connecting to the Docker VM, therefore you can run this command to enter the shell of the Docker VM:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
GitHub issue description: https://github.com/docker/for-mac/issues/4822
If you have installed Docker from Docker's Mac installer then you will have Docker Desktop installed (which also includes Docker Engine, the Docker CLI client, Docker Compose, Notary, Kubernetes, and Credential Helper).
Here is what Docker Desktop looks like in 2021, where you can change memory/swap or other resources.
Step 1 - click on docker's preferences as shown below.
Step 2 - Click on the Resources tab; here you can tweak the resources and finally click on the "Apply & Restart" button.
Please ignore the configuration I made; you can set it based on your requirements.
I want to create a Docker image for a GUI application (e.g. Chrome), and I hope this GUI app can run on a bare Linux server without an X server installed.
I know it is very easy to create and run a Docker image just for the X client (the GUI application itself). This requires an X server to be installed and running on the host:
sudo docker run -ti -v /tmp/.X11-unix:/tmp/.X11-unix xorg xterm -display :0
But in my case, I need both the X client and the X server to run in the Docker container.
Here's my dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y xorg
And I run the image with this command:
sudo docker run -i -t --rm -e DISPLAY=:0 --privileged xorg xinit
The X server starts and my screen turns black; after a few seconds, the xterm window displays. BUT, I can't use the keyboard or mouse. The screen seems frozen.
I have searched and tried many solutions but none of them fixed this problem. (A virtual X server is not what I need.)
I have resolved this problem.
At first, I thought maybe the X server in the Docker container could not access host devices, and I spent a lot of time on LXC/cgroups. For example, I changed the Docker exec engine to LXC, added the option --lxc-conf='lxc.cgroup.devices.allow = c 13:* rwm', and also created /dev/input/* in the container.
All of these operations are unnecessary.
If we run the Docker container in privileged mode, all host devices will be added automatically. Alternatively, we can use options like --device=/dev/input/mice to share a host device.
The real problem is that the X server could not discover and add devices automatically. I don't know why, but we can modify the X server's configuration and specify the devices manually.
Add the file /etc/X11/xorg.conf.d/10-input.conf:
Section "ServerFlags"
Option "AutoAddDevices" "False"
EndSection
Section "ServerLayout"
Identifier "Desktop"
InputDevice "Mouse0" "CorePointer"
InputDevice "Keyboard0" "CoreKeyboard"
EndSection
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "Device" "/dev/input/event2"
EndSection
Section "InputDevice"
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/input/mice"
Option "ZAxisMapping" "4 5 6 7"
EndSection
And run the Docker container:
docker run -i -t -v /tmp/.X11-unix:/tmp/.X11-unix --rm --privileged ubuntu startx
First, make sure that the proper input driver modules are installed:
RUN DEBIAN_FRONTEND='noninteractive' apt-get install -y --no-install-recommends xserver-xorg-input-evdev xserver-xorg-input-all
In modern Linux, udev is responsible for managing device nodes (including USB keyboards) in the /dev tree. It uses /run/udev/data, which isn't available inside your container even with the --privileged option.
So you need to mount that folder explicitly using -v /run/udev/data:/run/udev/data like this:
docker run -i -t -v /tmp/.X11-unix:/tmp/.X11-unix --rm --privileged -v /run/udev/data:/run/udev/data ubuntu startx
I've installed boot2docker (full install) on Windows 7 and am trying to run the container port redirection demo:
docker run --rm -i -t -p 80:80 nginx
This looks like it isn't quite finishing properly; it just stops.
When I open another Git Bash shell and run boot2docker ip I get 192.168.59.103, and when I put that into Chrome I get Error code: ERR_CONNECTION_TIMED_OUT.
It works fine for me with plain Docker on Ubuntu 14.04. What else do I need to do to make it work with boot2docker on Windows?
Looking more closely, my problem is the same as this question: Docker, can't reach “rails server” development from localhost:3000 using docker flag -p 3000:3000
The answer to that question that worked for me was this one, which simply says to run
boot2docker ssh -L 8080:localhost:80
at the terminal before starting boot2docker
In my case I do this (from a Git Bash terminal):
boot2docker init # from https://github.com/boot2docker/boot2docker
boot2docker up
boot2docker ssh -L 8787:localhost:8787 # sets up port forwarding and starts boot2docker
docker run -d -p 8787:8787 cboettig/rstudio # starts the container I want
Then I go to my web browser in Windows and point it to http://localhost:8787/ and I get a server instance of RStudio. When I'm done:
docker rm -f $(docker ps -a -q) # delete all containers
UPDATE: downgrading to an earlier version of VirtualBox will fix this
After struggling with folder sharing, I regressed through previous versions of VirtualBox and found that with version 4.3.12 I could enable folder sharing and have the port forwarded exactly according to the official instructions; that is, I could access my Docker container at 192.168.59.103. So downgrading VirtualBox is another option for working around this problem.
ANOTHER UPDATE: updating to the new release of v1.3.1 of boot2docker will fix this
This release just came out a week ago and includes VirtualBox Guest Additions, which simplifies all of this. I now simply do
boot2docker ssh # start boot2docker
docker run -d -p 8787:8787 -v /c/Users/foobar:/home/rstudio/foobar rocker/rstudio
And I get everything working as expected and can log into RStudio in my browser at http://localhost:8787/ (Linux) or http://192.168.59.103:8787 (Windows) and it just works.
In this case I've also got folder sharing working, with /c/Users/foobar corresponding to an existing folder on my computer at C:/Users/foobar (foobar can be anything). With this method I can read and write files both ways between Windows and RStudio, and I don't need to connect to a special IP address like the Samba method in the official docs does.
I had this problem too, after a couple of failed attempts at boot2docker start. This created multiple host-only network entries in VirtualBox (VirtualBox Host-Only Ethernet Adapter #2, VirtualBox Host-Only Ethernet Adapter #3), and the boot2docker VM was probably using the wrong one.
I cleaned up using the standard VirtualBox UI, leaving only one of the networks, and now everything works fine.
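If you prefer the command line to the VirtualBox UI, roughly the same cleanup can be done with VBoxManage (the adapter name is whatever shows up in the list on your machine):

VBoxManage list hostonlyifs
VBoxManage hostonlyif remove "VirtualBox Host-Only Ethernet Adapter #2"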
I'm using boot2docker 1.5.0.
Just to record something that happened to me that made me lose a couple of hours.