Very slow network performance of Docker containers with host's network - macOS

I'm having a problem with sluggish network performance between Docker containers and the host's network. I asked this question on the Docker forum but have received no answers so far.
Problem
Set-up: two Macs on the same local network; the first runs an MQTT broker (mosquitto); the second runs Docker for Mac. Two C++ programs run on the second Mac and exchange data multiple times through the MQTT broker (on the first Mac), using the Paho MQTT C library.
Native run: when I ran the two C++ programs natively, the network performance was excellent as expected. The programs were built with XCode 7.3.
Docker runs: when I ran either of the C++ programs, or both of them, in Docker, the network performance dropped dramatically, roughly 30 times slower than the native run. The Docker image is based on ubuntu:latest, and the programs were built by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609.
I tried to use the host network (--network="host" in Docker run) but it didn't help. I also tried to run the MQTT broker on the second Mac (so that the broker and the containers ran on the same host); the problem persisted. The problem existed on both my work LAN and my home network.
In theory, it could have been that the C++ programs were generally slow in Docker containers. But I doubt this was the case because in my experience, the general performance of C++ code in Docker is about as fast as in the native environment.
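For anyone wanting to reproduce the comparison without the C++ programs, the mosquitto command-line clients give a rough latency measurement (a sketch; the broker hostname and topic are placeholders):

# Terminal 1: watch messages arriving through the broker
mosquitto_sub -h broker-mac.local -t bench/topic

# Terminal 2: time a burst of publishes through the same broker,
# once natively and once from inside the container
time for i in $(seq 1 100); do mosquitto_pub -h broker-mac.local -t bench/topic -m "ping $i"; done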
Question
What could be the cause of this problem? Are there any settings in Docker that can solve this issue?

Your problem sounds very similar to this open issue on the Docker for Mac repo. Unfortunately, there doesn't seem to be a known solution, but the discussion in there may be useful. My personal guess at the moment is that the bug lives somewhere in the hyperkit virtualization layer used by Docker for Mac specifically.
In my case, I was oddly able to bypass this issue by using a different physical router, but I have no idea why that worked. Sadly, that's not really a 'solution', though.
I hate that this isn't a great answer, but I wanted to at least share the discussion in the open issue. Good luck, and keep us posted.

I suspect the default allocation of memory and CPU for the containers might not be optimal for the kind of network performance you are trying to achieve.
Investigate resource utilization within the containers using standard tools like top, htop, strace, etc. Or you can use the docker stats command while these instances are at peak operation:
$ docker stats node1 node2
CONTAINER   CPU %   MEM USAGE/LIMIT   MEM %   NET I/O
node1       0.07%   796 KB/64 MB      1.21%   788 B/648 B
node2       0.07%   2.746 MB/64 MB    4.29%   1.266 KB/648 B
Then you might want to modify various resource allocation parameters available with docker run.
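For example, one might raise the defaults with flags like these (a sketch; the values and image name are placeholders, and flag availability depends on your Docker version):

# Allow the container more memory and CPU than the defaults
docker run --memory=512m --cpus=2 my-image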
EDIT: Another thing to check would be the MTU of the actual system interface versus the setting on the Docker interfaces. Use
--mtu=BYTES when starting the Docker daemon to make the Docker interfaces' MTU match your system interface's MTU value.
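For instance, to compare the values (a sketch; assumes a Linux environment, that the interface is eth0, and that node1 is a running container):

# MTU of the system interface
ip link show eth0

# MTU inside a running container
docker exec node1 ip link show eth0

# Start the daemon with a matching value (or set "mtu" in daemon.json)
dockerd --mtu=1500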

Related

Performance issues on WSL 2

For the last two months I have (tried to) embrace WSL2 as my main development environment. It works fine with most small projects, but when it comes to complex ones, things start to slow down, making working on WSL2 impossible. By complex I mean a monorepo with React, Node, different libraries, etc. This same monorepo, on the same machine, works just fine when run from Windows itself.
Please note that, when working on WSL2, all my files are in the linux environment; I'm not trying to access Windows files from WSL2.
I have the latest Docker Desktop installed, with WSL2 integration and Kubernetes enabled. But the issue persists even with Docker completely stopped.
I've also tried to limit the memory consumption for WSL2, but that doesn't seem to fix the problem.
My machine is an Aero 15X with 16GB of RAM. A colleague suggested upgrading to 32GB of RAM, but before trying this, or "switching back" to Windows for now, I'd like to see if someone has any suggestions I could test out.
Thanks.
The recent kernel version, Linux MSI-wsl 5.10.16.3, starts slower than previous ones overall. But the root cause can be outside WSL: if you have a new NVIDIA GeForce card installed, Windows lets it grab as much memory as it can, i.e. 6-16 GB, without even using it. I had to limit WSL memory to 8 GB just to start the WSL service without an out-of-memory error. Try playing with this parameter in .wslconfig in your home directory, and look at the WSL_Console_Log in the same place. If the timestamps in that file are in ms, my kernel starts in 55 ms and then hangs on networking(!!!).
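For reference, the memory limit goes in %UserProfile%\.wslconfig and looks roughly like this (a sketch; choose a limit that fits your machine):

[wsl2]
memory=8GB

Run wsl --shutdown afterwards so the new limit takes effect.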
I'm afraid that the WSL kernel network driver
lshw -c network
*-network
description: Ethernet interface
physical id: 1
logical name: eth0
serial: 00:15:5d:52:c5:c0
size: 10Gbit/s
capabilities: ethernet physical
configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.10.16.3-microsoft-standard-WS duplex=full firmware=N/A ip=172.20.186.108 link=yes multicast=yes speed=10Gbit/s
is not as fast as it is expected to be.

How containers run on an AWS EC2 virtualization exactly?

As they say, containers do not run on the hypervisor layer, so what happens when we run a container on an AWS EC2 instance, given that EC2 itself runs on a hypervisor layer managed by AWS?
Can someone please help me understand this better?
Thanks,
You probably need to provide more information to back up your question. As it stands, there are a lot of possible interpretations of what you are asking, so here is a quick return to the basics:
A Hypervisor presents to an Operating System what looks like an independent machine that it can execute on. It might have some weird drivers (aka virtio) but it is just a machine.
A Container presents to a Program what looks like an independent Operating System. It might have some weird IP addresses (127.x.y.z), and stuff, but it looks like it has its own OS.
Note that with [2], the Program can start a whole bunch of other Programs, so as far as it is concerned, it has an entire OS to itself. In reality, it is sharing its machine with other Containers, each of which thinks it has its own machine, and some root machine which is hosting the Containers.
Containers can host other Containers (well, at least in theory).
Hypervisors can host other Hypervisors (in practice, but your mileage may vary).
So, a Container is an Operating System Instance, running within an Operating System, which might be running within a Virtual Machine on a Hypervisor which is running several Virtual Machines each of which might be running Operating systems which are hosting multiple Containers.
If it strikes you that this is why we can’t have nice things, yes, you are right.
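One way to see this layering for yourself on an EC2 instance (a sketch; assumes systemd and Docker are installed on the instance):

# The VM layer is visible to the instance's OS
systemd-detect-virt    # prints e.g. "kvm" or "xen"

# A container adds no kernel of its own; it shares the instance's kernel
uname -r                          # kernel version on the instance
docker run --rm ubuntu uname -r   # the same version inside a container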

Is it possible to access a hardware device with a Docker image under Windows?

Recently, a native Docker client for Windows was released (>= Windows 7).
I wonder: is it possible to forward access to physical devices, running Windows as host?
With a *nix host, this seems to be possible with the following syntax:
docker run -t -i --device=/dev/ttyUSB0 ubuntu bash
(as proposed here) which would forward the USB device /dev/ttyUSB0 on a *nix system to the docker image.
A description of the --device flag can be found in the docker docs.
What would be the syntax for a Windows host?
Windows USB devices are not currently available to Docker containers run with Docker for Windows.
Answered by a Docker staff member on the 7th of July 2017 in the Docker forum.
https://forums.docker.com/t/exposing-docker-to-usb-device-in-windows-10-with-docker-toolbox/29290/3
This answer is likely to become outdated at some point, assuming they eventually allow for this feature somehow.
This is a bad practice as it goes against the design philosophy of containers.
If you find yourself needing access to a hardware device, it's better to consider full virtualization such as VMware, Hyper-V, KVM/QEMU, Xen and so on.
However, the "proper" way is to design your system so that hardware is abstracted into a network service. In this way, you deploy the service to the physical machines to which the hardware is attached, and call them over the network. I don't know if this is possible in your case, but such decoupling provides a significant architectural advantage.
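As an illustration of that decoupling, a serial device could be published over TCP from the machine it is physically attached to (a sketch using socat on a Linux host; the device path, port, and baud rate are assumptions):

# On the machine with the hardware attached
socat TCP-LISTEN:5555,reuseaddr,fork FILE:/dev/ttyUSB0,b115200,raw

# From any client, including a container on a Windows host
socat - TCP:device-host:5555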

Docker.io for Windows [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
I was reading a nice question about Docker - the answer has an overview of Docker implementation details. I was wondering if anything like this is possible to do on the Windows platform.
Do Windows alternatives for Docker exist?
Is it theoretically possible to use other (Windows based) components to build it?
Update 1:
Slightly related question (sandboxing): Is there a lightweight, programmable Sandbox API for the Windows platform?
Update 2:
For info on how to install Docker on Windows (unrelated) - the official docs have great instructions on how to set up the environment using the boot2docker VM.
You can run docker in a virtual machine.
New Update
Vagrant has now integrated docker support. It can be used as provider or as provisioner. Here are some useful links.
Feature Preview: Docker-Based Development Environments
Vagrant Docs: Docker Provisioner
Vagrant Docs: Docker Provider
Old Update
As seanf pointed out in a comment below, Vagrant support was dropped. Instead they point to boot2docker:
boot2docker is a lightweight Linux distribution based on Tiny Core
Linux made specifically to run Docker containers. It runs completely
from RAM, weighs ~24MB and boots in ~5s (YMMV).
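For reference, the typical boot2docker workflow looked roughly like this (from memory; exact commands varied between versions):

boot2docker init   # create the VM
boot2docker up     # boot it
boot2docker ssh    # get a shell in the VM and run docker from there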
Old answer
The official Docker documentation contains a small guide to installing Docker inside a Vagrant box. Vagrant is a great VM management wrapper. The guide is for Mac/Linux, but you get the idea and can do the same on Windows:
http://docs.docker.io/en/latest/installation/vagrant/
This way you can share docker images across multiple systems with different operating systems.
If you're just searching for a way to deploy a pre-packaged set of applications in some sort of container for Windows, with registry and file access being virtualized but without using a full-blown virtual machine image, these (commercial) sandbox-like applications might be worth looking at:
Symantec Workspace Virtualization (get some ready-to-use packages from here)
Evalaze
Cameyo
BoxedApp
Edit: There's a new kid on the block, Spoon supports containers for Windows, and it actually looks very promising.
I have found that at least the file-system-related functionality is already in place in Windows (7, 8). One can use VHD files (virtual disks) to handle the "images" concept from Docker. These images are used for virtual machines but can be created/attached/used directly by Windows too:
diskpart
DISKPART> create vdisk file=c:\base-image.vhd maximum=200 type=expandable
A new image can be layered on top of the base image:
DISKPART> create vdisk file=c:\image-2.vhd parent=c:\base-image.vhd
See more information about managing virtual disks.
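To actually mount such a layered image, it can then be selected and attached (a sketch; see the diskpart documentation for the full workflow):

DISKPART> select vdisk file=c:\image-2.vhd
DISKPART> attach vdisk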
Unfortunately, lightweight process isolation/sandboxing is probably not possible (or at least not simple), although some methods do exist (http://www.sandboxie.com/, Native Client in Google Chrome, ...).
Microsoft is working on their own Hyper-V Containers, which are similar to Docker - and Azure also supports the Docker infrastructure.
That aside, it's hard to give precise alternatives, but on the Windows side we've had App-V for quite a long time, which virtualises and sandboxes applications so they can be run or streamed without actually being installed on a specific system. I've never meddled with it, but it seems to be able to run as a standalone client without any need for the intricate server infrastructure usually involved in anything Microsoft.
From another perspective, the disk image format used by Windows (VHD) supports standard differencing, so you can easily run many virtual machines from a single read-only OS image, where each virtual machine has a tiny write image to handle the differences. These are still full-blown virtual machines, though.
I don't know of any way to do the same thing on native Windows as of right now. I don't think the Windows kernel was built for this sort of thing, so in order for it to be supported, Microsoft would have to add the capabilities to the Windows kernel. If I'm wrong, someone please correct me.
The most common way for people to do something like this is to use a VM on Windows that runs a Linux-based OS, and run everything inside of that. You could also do the same thing using FreeBSD (jails) or Solaris (zones), if that is more your cup of tea. But Docker currently doesn't support FreeBSD or Solaris, so you would need to use the native tools for those.
Now you can run Docker natively on Windows.
See http://docs.master.dockerproject.com/installation/windows/
And
http://azure.microsoft.com/blog/2015/04/16/docker-client-for-windows-is-now-available?Ocid=OutgoingPromotion_Social_TW_Azure_20150416_169251868&linkId=13596123
Starting in June 2016, Docker can run on Microsoft's Hyper-V virtualization on Windows 10 hosts. This is now the preferred and "official" way to run Docker on Windows.
https://docs.docker.com/engine/installation/windows/
Hyper-V is a Type-1 hypervisor, meaning Docker will run one layer closer to the host hardware and perform significantly faster than boot2docker (which uses VirtualBox, a Type-2 hypervisor running inside the host OS).
The performance benefit for Docker has a downside too: enabling Hyper-V prevents hardware virtualization features from reaching Type-2 hypervisors, so existing VirtualBox images cannot be used with VT-x, and you might want to consider moving other virtualized OSes to Hyper-V as well.
Windows 7-8.1 hosts can still use boot2docker to run Docker containers, but the main development focus for Docker on Windows is the "new" Hyper-V Docker.
Hyper-V is only available on Windows Pro; upgrading to it costs about £110.
Or simply install Vagrant, VirtualBox, and Git Bash, then from your Git Bash terminal:
git clone git@github.com:danday74/vagrant-docker-skelly.git
cd vagrant-docker-skelly
vagrant up # takes approx 5 mins to create VM
vagrant ssh
docker -v
docker-compose -v
The Vagrantfile shows that:
1. This is a Xenial VM with Docker and Compose installed on it.
2. Ports 9900-9920 are mapped from the host to the VM.
3. The shared folder is shared from host to VM.
Tweak this as desired.
I got tired of fighting with a Maven Docker plugin, so I figured I would be able to fake it. This is how:
Using boot2docker and the following bat file makes it look like you're running Docker natively. Place it on your path.
@rem Forward every docker command over SSH to the boot2docker VM
@set SSH="C:\Program Files (x86)\Git\bin\ssh.exe"
@set RUN_REMOTE='docker %*'
@%SSH% -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -p 2022 -i %HOMEPATH%/.ssh/id_boot2docker -tt docker@localhost %RUN_REMOTE%
The ssh.exe comes from the msys-git package, which is bundled with boot2docker.
I'm pretty sure this solution has quite a few caveats, but it works pretty well for me.
Place this file on your path and Bob's your uncle.
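For example, with the file saved as docker.bat somewhere on your PATH (the name is your choice), ordinary docker commands work from a plain cmd prompt:

docker ps
docker run hello-world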

Better performance from Windows VirtualBox VMs on Ubuntu, or from Ubuntu VirtualBox VMs on Windows?

I am planning to develop an automated test solution with multiple Windows machines and multiple Ubuntu machines that perform related/interdependent tasks. To start the project, I'd like to have one or two Windows machines (virtual) and a few Ubuntu machines (virtual) running on a single desktop. It seems likely that I will be pushing a single desktop to the limit here, so I am trying to work out whether I will have better luck if my host OS is Ubuntu or if it is Windows 7. I would be able to use the host OS as one of the machines in my environment. The desktop is some sort of above-average Dell, but nothing really impressive.
Does anyone have any insight here? I've worked mostly with VMware in the past, and my host was Windows, along with my VMs.
Note: VirtualBox is a Type-2 hypervisor (it runs on the host OS, not on the hardware like a Type-1 hypervisor) and tends to offer weaker performance than, for example, Hyper-V, ESX, or Xen (Type-1 hypervisors).
Therefore, if performance is a considerable concern, you may squeeze more juice out of a Win8 or Windows Server 2012 box running, for example, Hyper-V. Further reading on this here and here (YMMV).
How your environment will run when hosted by a Windows vs. a Linux box is, frankly, impossible to tell. I suggest you build your VMs, try dual-booting your machine into Windows and Linux, and measure your scenario in each. Be sure to have enough RAM in the host to allocate enough working RAM to each VM, and enough IO throughput that your host doesn't end up dragging the performance of all VMs down if one VM saturates the machine's IO.
One last note of caution, though: don't completely trust fine-grained perf results measured on VMs - even the best hypervisors cannot truly replicate the performance characteristics of code running on bare metal. Treat your measurements as a guideline only.
Measure, then measure again. Measure again just to be sure... and THEN tweak your config and re-measure, measure, measure ;)
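If network throughput between the machines matters for your tests, iperf is one simple way to measure it between host and guest (a sketch; assumes iperf is installed on both ends, and the IP is a placeholder):

# On the VM (server side)
iperf -s

# On the host, pointing at the VM's address
iperf -c 192.168.56.101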
My $0.02:
If it's VirtualBox you are using, I would go with Ubuntu for certain. I have an AMD Phenom 945 with 16 GB of RAM running 12.04 LTS 64-bit. I can usually have 2 VMs running Windows and/or Ubuntu guests and never consume more than 7 GB of RAM. If you're running them in a testing solution, you could expect to see maybe 12 or 13 GB of RAM used, but the CPU might be your problem. My AMD Phenom runs great but would be maxed out for sure. I use VMware at work and on my laptop and would recommend it if you were running a Windows host. I also have VMware on my Ubuntu host, but it just does not run as well as it does on Windows, at least for me.
