Why is Vagrant not considered an isolation tool, while Docker is, when Vagrant runs a new OS and isolates everything in there? What is meant by isolation when one says: "if you're looking for isolation, use Docker"?
Isolation here means "just isolation", i.e., not virtualization. When you want to run Linux apps on Linux, we are talking about isolation; when you want to run any app on top of any OS, we are talking about virtualization.
Where did you read that Vagrant is not considered an isolation tool?
Actually, this statement is true, as Vagrant is neither a container backend nor a virtual machine: it is a manager. It can manage VirtualBox, VMware and now Docker. Depending on what your needs are, you can achieve isolation through Vagrant via VirtualBox or Docker, but Vagrant does not provide the isolation by itself.
Now that Vagrant supports Docker, you can use it if you need to; however, Docker is very simple by itself and IMHO does not require a tool like Vagrant. When you are playing with virtual machines, on the other hand, Vagrant is very helpful.
Vagrant is just a tool to create virtual machines (or even cloud instances and Docker containers). Vagrant itself does nothing for isolation. However, the tools it can drive (such as virtual machines or Docker) can be used for isolation (but also for many other things; isolation is just one of many aspects).
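As a rough illustration (a sketch, assuming Vagrant with both the VirtualBox and Docker providers available and a Vagrantfile that configures them), the same Vagrant commands simply delegate to whichever backend actually provides the isolation:
# Vagrant only orchestrates; the provider supplies the VM or the container
vagrant up --provider=virtualbox   # boots a full virtual machine
vagrant destroy -f
vagrant up --provider=docker       # starts a Docker container instead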
For some enlightenment about the difference between Docker and VMs, see How is Docker different from a normal virtual machine?
Docker: separates the application from the underlying Operating System that it runs on.
Docker virtualises the operating system for the application.
Vagrant is a virtual machine manager, so let's compare the Virtual Machines to Docker.
Virtual Machines: separates the Operating System from the underlying hardware that it runs on.
A virtual machine virtualises the hardware for the operating system.
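As a rough illustration (assuming Docker on a Linux host; alpine is just an example image), a container swaps out the userland the application sees while keeping the host's kernel:
cat /etc/os-release                          # the host's distribution
docker run --rm alpine cat /etc/os-release   # Alpine userland inside the container
docker run --rm alpine uname -r              # but still the host's kernel release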
Related
I am using Docker Desktop on Windows 10. I downloaded an image for Windows Server Core 1909
and then created two containers from the same image:
docker run -it mcr.microsoft.com/windows/servercore:1909 powershell.exe
When I ran sysinfo on both, it gave me a different hostname for each OS.
How do I see that the kernel is shared? Because these look to me like two different VMs, no different from a Hyper-V VM of the Core OS.
I thought a Docker container shares the kernel, but I don't see the same OS underneath.
Any ideas?
OK, I got the answer as well. There is a concept of Hyper-V isolation for containers on Windows: if the host is not the same version as the container, you get what is called Hyper-V isolation, which is essentially not process isolation but rather a traditional virtual-machine-like setup with no shared kernel.
The true container concept, with an actually shared kernel, is only possible on a Windows Server host, and only when the container image version matches the host.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-1909%2Cwindows-10-1809
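A quick way to check which mode a given container actually got (a sketch; --isolation and the HostConfig.Isolation field are Windows-specific, and <container-id> is a placeholder for your container's ID):
docker inspect -f "{{.HostConfig.Isolation}}" <container-id>   # "hyperv" or "process"
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:1909 powershell.exe   # process isolation; only works when host and container versions match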
As they say, containers do not run on the hypervisor layer, so what happens when we run a container on an AWS EC2 instance, since EC2 itself is running on a hypervisor layer managed by AWS?
Can someone please help me understand this better?
Thanks,
You probably need to provide more information to back up your question; as it stands, there are a lot of possible interpretations of what you are asking, so a quick return to the basics:
1. A Hypervisor presents to an Operating System what looks like an independent machine that it can execute on. It might have some weird drivers (aka virtio), but it is just a machine.
2. A Container presents to a Program what looks like an independent Operating System. It might have some weird IP addresses (127.x.y.z) and such, but it looks like it has its own OS.
Note that with [2], the Program can start a whole bunch of other Programs, so as far as it is concerned, it has an entire OS to itself. In reality it is sharing its machine with other Containers, each of which thinks it has its own machine, and with some root machine which is hosting the Containers.
Containers can host other Containers (well, at least in theory).
Hypervisors can host other Hypervisors (in practice, but your mileage might vary).
So, a Container is an Operating System Instance, running within an Operating System, which might be running within a Virtual Machine on a Hypervisor which is running several Virtual Machines each of which might be running Operating systems which are hosting multiple Containers.
If it strikes you that this is why we can’t have nice things, yes, you are right.
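One way to see where a container sits in that stack (a sketch, assuming a Linux EC2 instance with Docker installed; alpine is just an example image): the container reports the kernel of the EC2 guest OS, not anything belonging to the hypervisor:
uname -r                          # kernel of the EC2 instance (the VM's guest OS)
docker run --rm alpine uname -r   # same kernel release: the container shares it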
My question is: if you use Docker Toolbox (which is required on Windows 10 Home to run Docker), you are essentially using a virtual machine (VM)?
If you are already using a VM, is the only reason to use Docker from that point on to save on running many more instances?
Meaning, if you only want one extra (guest) instance, you can just have a VM; whereas with Docker (Toolbox on Windows 10 Home) you would have one VM and it runs Docker?
So the only way that is useful is if you want many more instances, as in: 1 VM + 1 Docker container, or + 1000 more containers?
Or am I missing something?
Yes, Docker Toolbox uses Oracle VirtualBox because Windows 7, 8, and Windows 10 Home cannot use Hyper-V. And yes, if you are already using a VM, the main reason to use Docker from that point is to save on running many more instances, but it also allows easy backup and deployment. Keep in mind, though, that you lose a decent amount of memory when running a VM, and even more when you then run Docker inside it.
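For reference, a sketch of what Toolbox is doing under the hood (docker-machine is the now-deprecated tool Toolbox ships with, and "default" is the name it gives its VirtualBox VM):
docker-machine ls            # shows the single boot2docker VM running in VirtualBox
docker-machine ssh default   # the Linux VM in which every container actually runs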
So although Docker CE will tell you your Windows doesn't support Hyper-V, this isn't always the case (if you check in System Info you might find Hyper-V enabled; if you're on an Insider build, or on many builds on GPU machines after the Anniversary Update, then you probably have Hyper-V on Windows 10 Home). There are a few workarounds until the Docker team addresses this issue.
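To check that from a prompt (a sketch in PowerShell; the exact wording of the output varies by Windows build):
systeminfo | findstr /i "Hyper-V"   # lists the Hyper-V requirement/availability lines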
You could use Docker from inside WSL (Windows Subsystem for Linux). Microsoft claims WSL accesses everything directly without Hyper-V, so this should theoretically run at the same speed. Of course, you can't use your GPU at all because of limitations with GPU passthrough on WSL, which you can ask to be resolved here.
You can also use Docker Toolbox with VirtualBox, as the other answer stated, but this will be inherently much slower, as you're running a container inside a virtual machine. On the other hand, you should theoretically be able to get GPU support through this, as well as other features (e.g. a GUI) that you wouldn't be able to use with WSL.
To answer the "usefulness" portion of the question:
It's also useful if you run code on a server, but need to develop/debug/update it. You want to test it locally, but to make sure the environment in which it executes is the same (to avoid unexpected, environment specific behavior), you use Docker both locally and on the server. In such a case, even though it's slow, I'll spin up a VM on my W10 Home laptop and run Docker in it.
The greatest feature of the Windows 10 Home May 2020 Update is Windows Subsystem for Linux 2. You can run Docker in it without the need for a complete virtual machine as in VirtualBox.
Install Docker Desktop and it will automatically identify WSL 2.
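A minimal sketch to confirm WSL 2 is the default before installing Docker Desktop:
wsl --set-default-version 2   # make newly installed distros use WSL 2
wsl -l -v                     # installed distros should show VERSION 2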
QUESTION
For instance-checking, does Docker provide the same level of abstraction as a virtual machine?
BACKGROUND
I have some software that is license-limited to one instance per machine. I know that if I install N virtual machines, I can have N instances of this software running on the same machine.
Is the same true of Docker? Does it trick this tool's instance-checking mechanism?
Docker runs containers. A container holds all the resources required to run an application that can be installed. My understanding is that this extends to all of userspace but does not encroach on kernelspace. Therefore any Linux kernel which supports the software included in a Docker image can run that image. The kernel itself is shared between applications and does not virtualize kernel operations for each Docker image (one kernel, many containers).
A VM host may spin up one or many VMs, each of which can host one or many Docker images, while the host itself (with the proper supporting packages) may also run one or many Docker images natively, without any VM running.
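A small sketch of that userspace/kernelspace split (assuming Docker on a Linux host; alpine is just an example image): each container gets its own hostname and process view, so a check that only looks at the process table or hostname cannot see the other container, while the kernel stays shared:
docker run --rm alpine sh -c 'hostname; ps'   # its own random hostname, only its own processes
docker run --rm alpine sh -c 'hostname; ps'   # a different hostname, again only its own processes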
I am running a game on Windows, and it only allows one instance per OS. If I want to run more, I currently open VMware and run the game inside a VM. But the problem is that it takes too much memory and disk space to run a whole separate virtual OS. I know Docker would reduce this, but it doesn't seem to support Windows.
Am I right? If so, are there any other solutions?
Docker uses LXC (Linux containers), so it cannot run a Windows operating system.
You can use Docker on Windows via a boot2docker VM, but this is not the same as Docker running a Windows operating system (your containers will run Unix-based operating systems inside the boot2docker VM).
To do what you're after, you'll need to use separate VMs.