I have a hard time finding information about this. Somewhere I've seen news that Docker has now been natively integrated into Windows. So apparently these are not "Linux containers" but some kind of "Windows containers"? Does anyone have more information on this?
There has been a significant update thanks to several Docker acquisitions, such as Unikernel. Now it is possible to install the beta (as of April '16) of Docker for Windows without any hassle.
Faster and more reliable: no more VirtualBox! The Docker engine is running in an Alpine Linux distribution on top of an xhyve Virtual Machine on Mac OS X or on a Hyper-V VM on Windows, and that VM is managed by the Docker application.
UPDATE (September '17)
Full native support available here.
An integrated, easy-to-deploy development environment for building, debugging and testing Docker apps on a Windows PC. Docker for Windows is a native Windows app deeply integrated with Hyper-V virtualization, networking and file system, making it the fastest and most reliable Docker environment for Windows.
Microsoft has added containerization primitives to the Windows kernel and is helping port Docker Engine to Windows. That means you can run native Windows containers with Docker on Windows Server 2016. It has been in tech preview for a while and is free to try. Details here: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/manage_docker?f=255&MSPPError=-2147217396
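Once the tech preview is installed on Windows Server 2016, running a native Windows container looks like the usual Docker workflow. A minimal sketch (image names from that era, so they may have moved since):
docker pull microsoft/nanoserver
docker run -it microsoft/nanoserver cmd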
I have read this:
https://azure.microsoft.com/blog/2015/04/16/docker-client-for-windows-is-now-available/
As you can read there, so far this is only a client interface to manage Docker containers that run inside Linux.
Currently (October 2016) there's a mess here.
Windows Server 2016 and Windows 10 build 1607 (Anniversary Update) support Docker containers natively. Obviously only with Windows base images, and moreover only Windows Server 2016 images (Nano Server or Server Core).
But there's also Docker for Windows, which is the only option suggested on https://www.docker.com/products/docker#/windows. One could easily think that this Docker is the one that runs natively on Windows. But it isn't!
Docker for Windows uses a VM with Linux to host all containers. So you can't pull Windows images.
So trying to pull a Windows image will fail with an "unknown blob" error:
C:\>docker pull microsoft/nanoserver
Using default tag: latest
latest: Pulling from microsoft/nanoserver
5496abde368a: Retrying in 1 second
94b4ce7ac4c7: Downloading
unknown blob
So Docker for Windows can be used only for Linux images!
How f... obvious is that, right?
For "real native Docker" (to run Windows container) we currently have download and install it manually as described in this manual - https://msdn.microsoft.com/virtualization/windowscontainers/quick_start/quick_start_windows_10
Related
I recently cloned a GitHub repository onto my Windows 10 PC. As the code is mostly C++, it had to be compiled and built to generate a working GUI.
To do so I used WSL, which allowed me to compile, build and run it (using CMake), but as WSL doesn't have its own display I had to use an X11 server (VcXsrv) for visualization. This last part seems to be making the interface rather slow: the FPS indicator never goes above 15, and I'm told that the native build runs at 60 FPS.
I'd like to know if there is a simple workaround that I can try from WSL to make it faster, as my other option is to try and learn Visual Studio.
The code run in WSL Ubuntu 20.04 is:
git clone --recurse-submodules https://github.com/nmwsharp/vector-heat-demo.git
cd vector-heat-demo
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
export DISPLAY=localhost:0
./bin/vector_heat /path/to/your/mesh.obj
And on the Windows side, I'm using VcXsrv settings: multiple windows, display 0, start no client, disable native opengl, from this answer.
The reason it's slow is the architecture of WSL1. See below.
Comparison of WSL1 and WSL2
In WSL1 the Linux kernel is kind of emulated: there is a software layer which maps Linux syscalls to Windows syscalls (meaning you lose some time whenever you access devices), which leads to the low FPS in your case.
In WSL2 Microsoft created a hypervisor on which Windows 10 and the Microsoft Linux kernel run (your Ubuntu, Debian, ... runs on top of that), which means WSL2 is way faster because it skips that translation layer (more information can be found here: differences WSL1 & WSL2).
I've also created a guide on how to use it with X11; see here (it works for me). One WSL2 detail worth calling out is sketched below.
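Under WSL2 the distro runs in its own VM with its own IP address, so DISPLAY=localhost:0 no longer reaches VcXsrv on the Windows side. A common workaround (a sketch, assuming VcXsrv is started with access control disabled) is to point DISPLAY at the Windows host IP that WSL writes into /etc/resolv.conf:
# run inside the WSL2 shell before starting the GUI app
export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0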
Connect VSCode to WSL
It's just for coding, not for running your GUI.
In this scenario a headless VSCode server is started in your WSL. The VSCode on your Windows machine will then connect to this instance (meaning you'll have the Linux filesystem + GCC toolchain available).
VSCode Remote WSL Documentation from microsoft
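In practice (a sketch, assuming VS Code and its Remote - WSL extension are installed on the Windows side), you just launch it from inside the WSL shell:
# from the WSL shell, in your project directory
cd ~/vector-heat-demo
code .   # opens the Windows VS Code connected to this WSL instance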
This README file provides a link to instructions on how to create the ManageIQ Appliance dev setup for a macOS environment, but it says that Linux instructions are TBD.
Are we truly limited to macOS for development? Are there no instructions out there for setting up in a Linux or Windows environment?
Thank you!
Can one create an MIQ Dev Appliance in Linux or Windows environment?
You can find the detailed guide here for different Linux distros.
Are we truly limited to macOS for development?
The main limitation is that the Podman client on macOS doesn't work properly. Since Podman is a tool for running Linux containers, you are going to need some remote Linux machine running, so you can install the remote client and then set up the SSH connection information in the podman-remote.conf file. (here)
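For illustration, the connection setup in podman-remote.conf looks roughly like this (host name and user are hypothetical; check the podman-remote.conf man page for the exact schema of your Podman version):
[connections]
  [connections.devbox]
  destination = "devbox.example.com"
  username = "core"
  port = 22
  default = true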
I'm using Azure DevOps, and have a Linux build pipeline (ubuntu-16.04) set up that builds the code, starts containers with Docker Compose, then runs integration tests using the containers. This all works great.
Now I want to set up a Windows build pipeline that does the same thing. However, with both the windows-2019 and win-1803 images, when I do docker stack up, I get this error message:
image operating system "linux" cannot be used on this platform
So, I guess Docker is installed in Windows mode, and I thought to switch it to Linux containers using:
DockerCli.exe" -SwitchLinuxEngine
or
"%ProgramFiles%\Docker\Docker\DockerCli.exe" -SwitchLinuxEngine
However, the DockerCli.exe executable doesn't seem to be installed at all.
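For what it's worth, you can confirm which mode the daemon is in without DockerCli.exe, using the standard CLI (on these agents it presumably prints windows):
docker version --format "{{.Server.Os}}"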
The only 2 things I can think of are:
Setup a self-hosted build agent
Somehow start the required containers somewhere else
But both of these will be a lot of work to set up, which I really don't need; nor do I want the running costs or the job of maintaining it.
Are there any workarounds to run Linux containers on hosted Windows build agents?
Run Linux containers in an Azure DevOps Windows hosted build agent
Firstly, see the list of images installed on the Windows hosted agents: Docker images in Windows hosted agents. Docker EE on Windows Server does not support Linux containers at all. So, it is impossible to build a Linux Docker image on the Hosted Win-1803 agent; it can only build Windows Docker images.
Until now, the only two workarounds are using a self-hosted agent based on a Windows machine, or running a build with two separate agent jobs, passing the build artifacts back and forth between one agent job that runs on a Hosted Linux agent and another that runs on a Hosted Windows agent (see the sketch below).
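An illustrative two-job pipeline along those lines (job names, scripts and artifact names are hypothetical):
jobs:
- job: LinuxContainers
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: docker-compose up -d && ./run-integration-tests.sh
    displayName: 'Run containers and integration tests'
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'results'
- job: WindowsBuild
  dependsOn: LinuxContainers
  pool:
    vmImage: 'windows-2019'
  steps:
  - task: DownloadBuildArtifacts@0
    inputs:
      artifactName: 'results'
  - script: echo consume the test results here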
But since neither of these workarounds is convenient for you, there isn't any other workaround that can achieve what you want.
In addition, there is a suggestion for such a feature raised on our official forum: Support for Docker with Linux Containers on Windows (LCOW) on hosted agent pool. You could vote and comment there; our Product Group team reviews these suggestions regularly and considers taking them onto the Developer Roadmap. If this feature comes true, I think it will be very convenient to build Linux containers without having to think about which agent supports them.
I have been developing and maintaining some Windows-on-Windows Docker containers that run ASP.NET Core applications on Windows Server 2016 (using Docker EE) for some time now. I was planning on turning over all ongoing updates/maintenance to server administrators, but I have hit a problem. When I started, I believe I was using SAC builds, but now none of the SAC (or LTS, for that matter) builds pull on Windows Server 2016, and though I have spent a good deal of time googling, this whole thing seems to be a big cluster. With Docker on Linux, I would just use any LTS distro and apply updates when building the container. Does Microsoft have a clear plan for doing the same? It seems like they are missing the point of Docker. I want to run a Windows-on-Windows container on Windows Server 2016, and I want to make sure when I recreate it that I am getting the latest security updates.
https://devblogs.microsoft.com/dotnet/net-core-container-images-now-published-to-microsoft-container-registry/
This page talks about the big changes made in Docker images recently and specifically says the following:
.NET Core images for Nano Server 2016 are still available on Docker Hub and MCR and will not be deleted. You can continue to use them but they are not supported and will not get new updates. If you need to do this and previously used manifest tags, like 1.1-sdk, you can now use the following MCR tags (Docker Hub variants are similar)
Does this mean the new tags listed get updates? I would assume they would tag them with LTS instead of SAC2016 to better convey the notion that they are continuing to update them.
This page seems to be really helpful, but none of the images listed pull on Windows Server 2016:
https://andrewlock.net/exploring-the-net-core-mcr-docker-files-runtime-vs-aspnet-vs-sdk/
This is what I get when I attempt to pull any of the images:
1709: Pulling from windows/nanoserver
no matching manifest for unknown in the manifest list entries
To clarify, I can currently run all my applications using such images as these:
mcr.microsoft.com/dotnet/core/runtime 2.2-nanoserver-sac2016 4a3bbafea836 3 months ago 1.27GB
mcr.microsoft.com/dotnet/core/sdk 2.2-nanoserver-sac2016 9773d80bdd64 3 months ago 2.62GB
I am looking for clarity on support of these images, or a clearer direction to migrate.
Right now, for LTS, the image you want to pull is mcr.microsoft.com/dotnet/core/aspnet:2.1, since 2.1 is the LTS release of ASP.NET Core. The underlying server reference doesn't matter, honestly, and all the .NET Core images are multi-arch, so the right underlying images are pulled automatically (Linux for a Linux host, Windows for a Windows host, and AMD64, x86, ARM, etc.).
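Concretely, the pull is just the plain tag, and the manifest list resolves the right OS/architecture variant for your host:
docker pull mcr.microsoft.com/dotnet/core/aspnet:2.1
(If you have the experimental CLI features enabled, docker manifest inspect on that tag lists the available variants.)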
The OS of the image (aside from being the right architecture and platform) is really kind of meaningless. It's mostly a translation layer. Images aren't VMs, the OS is on the host, and that's where your security patches and such apply. As long as your host is patched up, you're good.
UPDATE
This has apparently led to some pedantic arguments in the comments, so let me be a little clearer. What I'm talking about here is best described by the containers-vs-VMs graphic from the Docker site:
Whereas a VM has a copy of the OS on each instance, containers utilize a shared host OS. The OS base image is basically a proxy. It provides the API, but everything at an OS-level happens on the host OS, not in the container.
As such, yes, the OS base image matters to a certain extent. You can't target a Linux base image and deploy to Windows Server. You'd have issues targeting Windows Server 2019 and deploying to 2016, as well. However, assuming that the OS base image is remotely compatible with the host OS, then everything above and beyond that is meaningless.
Specifically to the discussion of patches and LTS versions, you don't need to care, because again, what's actually running is components of the host OS, not anything from the image itself. You can actually see this if you open Task Manager on the host OS. You'll see duplicate system-level processes tied to each running container. Even though the container shows running processes as well, it is these host-level processes that are actually doing the work, and therefore, it is only important that they are patched and supported. If everything is good on your host, you need not worry about the containers, at least for the OS part of things.
https://github.com/docker/for-win/issues/3761
I was working on this around Mar 12, when all the Docker pulls stopped working because of the changes MS made. So I am sure I saw this page before, but on rereading the entire thing, I see this comment:
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
That seems like a reasonable tag name for long-term support. Lo and behold, it works. I am currently theorizing that Nano Server is only for the latest and greatest, and am thinking of opening an issue on GitHub to see if someone will answer that definitively.
I think one of the comments on that page from the GitHub maintainer settles the debate in Chris Pratt's answer. I think misinformation floating around about security is dangerous, so I am reposting it here to help future souls who stumble on this question:
Yes, when running with process-isolation, the version must match the Windows kernel version you're running on. Unlike Linux, the Windows kernel does not have a stable API, so container images running on Windows must have libraries that match the kernel on which they will be running to make it work (which is also why those images are a lot bigger than Linux images).
Vulnerable libraries in a docker container DO matter. You cannot rely on the host OS being up to date to protect you.
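As a practical aside (my note, not part of the quoted comment): Windows also offers Hyper-V isolation, which relaxes the strict kernel-version matching by running each container in a lightweight VM. Something along these lines:
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver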
Further Research
Still researching this, so adding my updates for your benefit as I find them:
Article about Migrating
https://www.altaro.com/hyper-v/nano-server-no-longer-supported-for-infrastructure/
TLDR - Move to servercore
Server 2016 14393 Tags on Docker Hub
The main Docker Hub Nano Server page does not list any 14393 tags, but visiting the full tags list at the bottom of the page shows many. I was able to pull mcr.microsoft.com/windows/nanoserver:10.0.14393.1066, and it is only 1 GB instead of 14 GB for Server Core.
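So the working pull here was simply:
docker pull mcr.microsoft.com/windows/nanoserver:10.0.14393.1066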
I am unable to run Linux containers, as I run Docker in my development VM which has Visual Studio installed; I have not been able to get it working yet, as it is an unsupported scenario, and many of the Google solutions have unfortunately not resolved it for me.
However, I was thinking that maybe I could keep Docker Desktop running on my dev VM (so that VS would be happy it was still "local") but actually have the Docker engine/daemon running on a remote host where Linux containers actually work. Is it possible to repoint Docker Desktop for Windows to a Docker engine on a remote host?
Apologies if my terminology is rubbish; I am new to Docker. Also, I have done some searching and have seen articles and messages talking about docker-machine and the DOCKER_HOST environment variable, but I cannot see anything yet that applies to Docker Desktop (e.g. I don't have that env variable set on my Docker Windows VM, so maybe it doesn't use that mechanism).
Thanks in advance
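For reference, the plain docker CLI (which Docker Desktop wraps) can target a remote engine through the DOCKER_HOST variable mentioned above. A sketch with a hypothetical host name, assuming the remote engine is reachable over SSH (Docker 18.09+) or has its TCP API exposed:
rem in the Windows dev VM's command prompt
set DOCKER_HOST=ssh://user@my-linux-box
docker ps
Whether Docker Desktop itself honours this for its GUI integration is exactly the open question here.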