I am using Docker Desktop on Windows 10. I downloaded an image for Windows Server Core 1909,
then created two containers from the same image:
docker run -it mcr.microsoft.com/windows/servercore:1909 powershell.exe
When I ran systeminfo in both, each reported a different hostname.
How do I see that the kernel is shared? Because these look like two different VMs, which is no different from a Hyper-V VM of the core OS.
I thought a Docker container shares the host's kernel, but I don't see the same OS underneath.
Any ideas?
OK, I found the answer myself. Windows containers have the concept of Hyper-V isolation. If the host is not the same version as the container image, you get what is called Hyper-V isolation, which is essentially not process isolation; it is more like a traditional virtual machine, with no shared kernel.
The true container concept, with an actually shared kernel (process isolation), is only possible on a Windows Server host where the container image version matches the host version.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-1909%2Cwindows-10-1809
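For what it's worth, you can check which isolation mode a container actually got, and on a matching Windows Server host you can force the shared-kernel mode explicitly (this is a sketch; <container-name> is a placeholder for your container's name or ID):
docker inspect --format '{{.HostConfig.Isolation}}' <container-name>
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:1909 powershell.exe
If process isolation isn't possible on your host, the second command fails with a version-compatibility error instead of silently falling back to a VM.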
Related
My question is: if you use Docker Toolbox (which is required to run Docker on Windows 10 Home), are you essentially using a virtual machine (VM)?
If you are already using a VM, is the only reason to use Docker from that point on to save on running many more instances?
Meaning, if you only want one extra guest instance, you could just have a VM. Whereas with Docker (Toolbox on Windows 10 Home) you would have one VM, and it runs Docker?
The only way that is useful is if you want many more instances, as in: 1 VM + 1 Docker container, or + 1000 more containers?
Or am I missing something?
Yes, Docker Toolbox uses Oracle VirtualBox because Windows 7, 8, and Windows 10 Home cannot use Hyper-V. And yes, if you are already using a VM, the main reason to use Docker from that point on is to save on running many more instances, but it also allows easy backup and deployment. Keep in mind that you lose a decent amount of memory when running a VM, and then even more when running Docker inside it.
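Under the hood, Docker Toolbox drives VirtualBox through docker-machine. Roughly, this is what it does when it sets up its VM (the machine name "default" is just the conventional one):
docker-machine create --driver virtualbox default
docker-machine env default
docker-machine ip default
The env command prints the DOCKER_HOST variables your shell needs so that the docker CLI on Windows talks to the daemon inside the VirtualBox VM.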
So although Docker CE will tell you your Windows doesn't support Hyper-V, this isn't always the case. If you check System Information you might find Hyper-V enabled; if you're on an Insider build, or on many builds on GPU machines after the Anniversary Update, then you probably have Hyper-V on Windows 10 Home. There are a few workarounds until the Docker team addresses this issue.
You could use Docker from inside WSL (Windows Subsystem for Linux). Microsoft claims WSL accesses everything directly without Hyper-V, so this should theoretically run at the same speed. Of course you can't use your GPU at all because of limitations with GPU passthrough on WSL, which you can ask to be resolved here.
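In practice this usually means installing the docker CLI inside WSL and pointing it at a daemon that runs elsewhere. A minimal sketch, assuming you have already exposed a Docker daemon on tcp://localhost:2375 (for example via Docker Desktop's settings or a Toolbox VM):
# inside WSL
export DOCKER_HOST=tcp://localhost:2375
docker version
docker run --rm hello-world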
You can also use Docker Toolbox, as the other answer stated, with VirtualBox, but this will be inherently much slower since you're running containers inside a virtualized guest OS. You should theoretically be able to get GPU support this way, as well as other features (e.g. a GUI) that you wouldn't get with WSL.
To answer the "usefulness" portion of the question:
It's also useful if you run code on a server, but need to develop/debug/update it. You want to test it locally, but to make sure the environment in which it executes is the same (to avoid unexpected, environment specific behavior), you use Docker both locally and on the server. In such a case, even though it's slow, I'll spin up a VM on my W10 Home laptop and run Docker in it.
The greatest feature of the Windows 10 Home May 2020 Update is Windows Subsystem for Linux 2. You can run Docker in it without the need for a complete virtual machine as in VirtualBox.
Install Docker Desktop and it will automatically identify WSL 2.
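A quick way to confirm you are actually on WSL 2 before installing Docker Desktop (the distro names will vary on your machine):
wsl --set-default-version 2
wsl -l -v
The second command lists your distros with a VERSION column; Docker Desktop's WSL 2 backend only integrates with distros running as version 2.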
QUESTION
For instance-checking purposes, does Docker provide the same level of abstraction as a virtual machine?
BACKGROUND
I have some software that is license-limited to one instance per machine. I know that if I install N virtual machines, I can have N instances of this software running on the same machine.
Is the same true of docker? Does it trick this tool's instance-checking mechanism?
Docker runs containers. A container holds all the resources required to run an application. My understanding is that this extends to all of userspace but does not encroach on kernelspace. Therefore any Linux kernel that supports the software included in a Docker image can run that image. The kernel itself is shared between containers and is not virtualized per image (one kernel, many containers).
A VM host may spin up one or many VMs, each of which can host one or many Docker containers, while the host itself (with the proper supporting packages) may also natively run one or many containers without any VM.
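A simple way to see the shared kernel on a Linux host (a sketch; the image choices are arbitrary):
uname -r
docker run --rm alpine uname -r
docker run --rm ubuntu uname -r
All three print the same kernel release, because the containers use the host's kernel rather than booting their own.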
Docker on Windows appears to be what's coming built into Windows Server 2016, and it supports running Windows inside a Docker container and using Windows as a Docker container host. Does this support Linux? I don't think so; I think it only supports running Windows Docker containers. This also appears to be maintained by Microsoft.
Docker for Windows appears to be a separate install created by the Docker team to bring Linux Docker to Windows, so Windows can be the Docker host but all containers are still just normal Linux containers. Does this support Windows containers? I don't think so; I think it only supports running Linux Docker containers. This also appears to be maintained by Docker.
One other interesting note is that Docker Tools for Visual Studio appears to only support Docker Desktop for Windows and not Docker on Windows.
What I'm really looking for are the stated differences between the two, some sort of good comparison. What features is each trying to achieve, where are they similar, and where are they different? Will they always be different, or will they ever come together?
Docker on Windows is a colloquial way to refer to just the Docker Engine running on Windows. I find it helpful to think of this as a Windows Container Host, so yes Windows containers only. This would be what you would run on a Windows Server 2016 machine. So maybe a better name is Docker for Windows Server which I believe people have used as well. I still prefer a Windows Container Host. Which means it only has the Docker Engine at the end of the day, doesn't even need to have any of the Docker clients (docker CLI, docker-compose, etc).
Docker Desktop for Windows is a product meant for running both Linux and Windows containers on Windows. It's not meant for a production environment, and instead is meant for a desktop/client SKU of Windows, hence the Windows 10 requirement. So you could think of this as Docker for Windows 10. Because DfW can run both container types, there are different configurations that it sets up on your machine:
When using Linux Containers, DfW creates a MobyLinuxVM with Hyper-V inside of which it runs Linux containers, transparently, as if they were running on the Windows 10 host.
When using Windows Containers, DfW installs the same components as Docker on Windows, so that you have a Windows Container Host; you now have the Windows Docker Engine set up. This then allows you to run Windows containers on a Windows 10 client SKU.
Theoretically you could install DfW on Windows Server; I haven't tried it, so I don't know if that would fail, but why would you want to run Linux containers on a Windows host in production? In production you would have Linux Container Hosts that run Linux containers and Windows Container Hosts that run Windows containers; that avoids overhead and simplifies things.
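If you're ever unsure which mode Docker Desktop is currently in, you can ask the daemon, and you can switch modes from the command line (the DockerCli.exe path below assumes the default install location):
docker version --format '{{.Server.Os}}'
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
The first command prints either windows or linux depending on which engine the client is pointed at; the second toggles between the two container modes.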
Just to add on top of Wes's answer on Docker for Windows: a few details about the experimental LCOW, which is what you are looking for if you want side-by-side execution of Windows and Linux containers on the same Windows host machine.
Right now there are two ways to run Linux containers with Docker for Windows and Hyper-V:
Run Linux containers in a full Linux VM - this is what Docker typically does today.
Run Linux containers On Windows (LCOW) with Hyper-V isolation - this is a new option in Docker for Windows.
In the first approach, Docker for Windows runs a Docker daemon service on the Windows host machine, and another one is available inside the Linux Moby VM. So basically you have two different Docker hosts: one running on your Windows host machine, managing only Windows containers, and the other running in the Linux Moby VM, managing only Linux containers.
It is important to note that all Linux containers share a single Linux kernel in the Moby VM, and all Windows containers share a single Windows kernel on the Windows host machine.
Things really get interesting with the second approach:
Linux containers with Hyper-V isolation run each Linux container in an optimized Linux VM with just enough OS to run containers. Each Linux container has its own kernel and its own VM sandbox. They're also managed by Docker on Windows directly.
The main difference in this approach is that there is only one Docker daemon service running on the Windows host machine, managing both Windows and Linux containers.
All Windows containers share a single Windows kernel, while each Linux container gets its own Linux kernel.
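With LCOW enabled (it was experimental and had to be switched on in the daemon's experimental settings), a single daemon can run both kinds of images side by side; a rough sketch:
# a Linux image; with LCOW each Linux container runs in its own minimal VM with its own kernel
docker run --rm --platform linux alpine uname -a
# a Windows image, run as a regular Windows container
docker run --rm mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver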
To understand this in more detail, please refer to:
https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers
I am running a game on Windows, and each OS can only run one instance of it. If I want to run more instances, I currently open VMware and run the game inside a VM. The problem is that it takes too much memory and disk to run a whole separate virtual OS. I know Docker would reduce this, but it doesn't seem to support Windows.
Am I right? If so, any other solutions?
Docker uses LXC (Linux containers), so it cannot run a Windows operating system.
You can use Docker on Windows via a boot2docker VM, but this is not the same as Docker running a Windows operating system (your containers will run Unix-based operating systems inside the boot2docker VM).
To do what you're after, you'll need to use separate VMs.
Why is Vagrant not considered isolation while Docker is, when Vagrant runs a new OS and isolates everything in there? What is meant by isolation when one says "if you're looking for isolation, use Docker"?
Isolation here means "only isolation", i.e. not virtualization. When you want to run Linux apps on Linux, we are talking about isolation; when you want to run any app on top of any OS, we are talking about virtualization.
Where did you read that Vagrant was not considered isolation?
Actually, this statement is true, as Vagrant is neither a container backend nor a virtual machine. It is a manager. It can manage VirtualBox, VMware, and now Docker. Depending on your needs, you can achieve isolation through Vagrant via VirtualBox or Docker, but Vagrant does not provide the isolation by itself.
Now that Vagrant supports Docker, you can use it if you need to; however, Docker is very simple by itself and IMHO does not require a tool like Vagrant. When you are playing with virtual machines, on the other hand, Vagrant is very helpful.
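If you do want Vagrant to drive Docker for you, it boils down to picking the Docker provider when bringing the environment up (a sketch; this assumes a Vagrantfile that configures the Docker provider already exists in the current directory):
vagrant up --provider=docker
vagrant status
The isolation still comes from Docker (or from VirtualBox/VMware if you pick one of those providers); Vagrant just orchestrates it.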
Vagrant is just a tool to create virtual machines (or even cloud instances and Docker containers). Vagrant itself does nothing for isolation. However, the tools it can handle (such as virtual machines or Docker) can be used for isolation (but also for many other things; isolation is just one of many aspects).
For some enlightenment about the difference between Docker and VMs, see How is Docker different from a normal virtual machine?
Docker: separates the application from the underlying operating system it runs on.
Docker virtualises the operating system for the application.
Vagrant is a virtual machine manager, so let's compare virtual machines to Docker.
Virtual machines: separate the operating system from the underlying hardware it runs on.
A virtual machine virtualises the hardware for the operating system.