Firstly, is it correct to compare these two products? If yes, what is the main difference between them?
CoreOS is a Linux-based operating system that includes distributed systems tools such as etcd, locksmith, and flannel, as well as the orchestration tool fleet.
Mesos is an orchestration system which runs atop a Linux operating system and handles the scheduling, fault tolerance, and scaling of an application or series of applications.
Mesos can run atop CoreOS in a series of containers (https://github.com/veverjak/coreos-mesos-marathon), and CoreOS can be used without Mesos.
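To make "distributed systems tools" a bit more concrete: etcd, for instance, is a key-value store that every machine in a CoreOS cluster can use to share configuration. Below is a minimal sketch against etcd's v2 HTTP API; it assumes an etcd endpoint on the default client port at 127.0.0.1:2379, and the key name is made up for illustration.

```python
# Minimal sketch: writing and reading a shared configuration key via etcd's
# v2 HTTP API. Assumes an etcd instance is reachable at 127.0.0.1:2379;
# the key "/config/db_endpoint" is a made-up example.
import requests

ETCD = "http://127.0.0.1:2379/v2/keys"

# Any machine in the cluster can publish a value...
requests.put(f"{ETCD}/config/db_endpoint", data={"value": "10.0.0.5:5432"})

# ...and any other machine can read it back.
resp = requests.get(f"{ETCD}/config/db_endpoint").json()
print(resp["node"]["value"])  # -> 10.0.0.5:5432
```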
As they say, containers do not run on a hypervisor layer, so what happens when we run a container on an AWS EC2 instance, given that EC2 itself runs on a hypervisor layer managed by AWS?
Can someone please help me understand this better?
Thanks,
You probably need to provide more information to back up your question. As asked, it leaves room for a lot of interpretations, so here is a quick return to the basics:
A Hypervisor presents to an Operating System what looks like an independent machine that it can execute on. It might have some weird drivers (virtio, for example), but it is just a machine.
A Container presents to a Program what looks like an independent Operating System. It might have some weird IP addresses (127.x.y.z) and the like, but it looks like it has its own OS.
Note that in the second case, the Program can start a whole bunch of other Programs, so as far as it is concerned, it has an entire OS to itself. In reality it is sharing its machine with other Containers, each of which thinks it has its own machine, and some root machine which is hosting the Containers.
Containers can host other Containers (well, at least in theory).
Hypervisors can host other Hypervisors (in practice, though your mileage may vary).
So, a Container is an Operating System instance running within an Operating System, which might be running within a Virtual Machine on a Hypervisor, which is running several Virtual Machines, each of which might be running an Operating System hosting multiple Containers.
If it strikes you that this is why we can’t have nice things, yes, you are right.
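One quick way to see this layering for yourself is to compare kernel versions. The sketch below assumes Docker is installed and can run the alpine image; the container reports the host's kernel release because it shares that kernel, whereas a guest inside a VM would report its own.

```python
# Minimal sketch: a container shares the host kernel, so the kernel release
# reported inside the container matches the host's. A VM guest would report
# its own kernel instead. Assumes Docker is installed and can run "alpine".
import os
import subprocess

host_kernel = os.uname().release
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("host kernel:      ", host_kernel)
print("container kernel: ", container_kernel)  # same value as host_kernel
```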
I am new to Hadoop and don't know why a virtual machine (VM) is used to run and deploy a Hadoop cluster and its modules.
Can we not use Hadoop directly on the local Linux/Unix system?
the reason why a virtual machine (VM) is used to run and deploy Hadoop cluster and its modules
Because lots of data centers have more virtual capacity than physical machines. Thousands of servers can run on hundreds of physical machines. That's what any Hadoop cluster in the cloud would be - a bunch of virtualized machines.
Because some companies just want a small, cheap proof of concept that Hadoop will work within their ecosystem of existing software.
Because it makes for an easier demo to boot up a VM than to carry around several physical machines.
etc...
Anyway, I'd say physical hardware is strongly encouraged, but it costs time and money to maintain, between dealing with hardware failures and keeping the software patched across Hadoop and the OS. Primarily you'd want to be able to pick and choose hardware that suits your use cases: lots of storage for a "data lake", or lots of memory for fast processing, with some SSDs mixed in for fast caching...
Sure, VMs let you allocate some of those resources dynamically, but when a disk or memory module fails, it affects every VM on that physical machine rather than a single server.
I have a cluster of machines running Windows Server 2012 R2.
I would like to manage them with Mesos.
To the best of my knowledge, Microsoft is actively contributing to Mesos (DC/OS) and will support containers natively on Windows Server 2016. Furthermore, it looks like there is another container flavour that uses Hyper-V.
I can run my Mesos masters on Linux hosts. However, I need my slaves on Windows Server 2012 R2 hosts. It is not clear to me which technologies are already available (and production-ready) for my Windows Server version.
What are my options for using Mesos to manage the resources of my Windows Server machines?
Is the Mesos agent for Windows (Server 2012 R2) production-ready?
Can I use containers (Hyper-V or Docker)? If not, does resource isolation work on Windows (on Linux you can use cgroups)?
Can I run any framework I like, or are some not compatible with Windows?
Mesos version 1.0.0 was recently released, and it allows you to run the slave (agent) and launcher on Windows. Unfortunately not the master; that is still Linux-only, but the master doesn't really ever need to be Windows. The slave was the important bit for bringing Windows machines into the Mesos domain.
I've just been investigating the Mesos slave on Windows. I'm pleased to say that it appears to be working OK (this opinion is subject to change, as I'm still testing it). Whether it is production-ready is something any business would have to decide for itself.
Mesos has always had its own isolation technology. Interestingly, the containerizer implementation has been redone, and it now accepts a number of container image formats, so you can use your Docker images as well as a few others; this is going to suit you. There was a good presentation on this at MesosCon: https://www.youtube.com/watch?v=rHUngcGgzVM
Docker has been stealing the show to some extent, but if you use the Mesos agent, Windows Server 2016 and its container technology (Docker) aren't needed, and therefore it should run on Windows Server 2012. I've not got around to trying this yet, but it's definitely a test worth running, since it opens up deployment options. Anyone?
One thing to remember about containers: they are not VMs. The guest image must be a derivative of the host's OS; you can't run a Linux image on a Windows machine. This is causing me a headache: I can't use Nano Server at the moment, so my image sizes are 4 GB+ and the initial deploy time is hours.
Why is Vagrant not considered isolation while Docker is, when Vagrant runs a new OS and isolates everything in there? What is meant by isolation when one says: "if you're looking for isolation, use Docker"?
Isolation here means "only isolation", i.e., not virtualization. When you want to run Linux apps on Linux, we are talking about isolation; when you want to run any app on top of any OS, we are talking about virtualization.
Where did you read that Vagrant was not considered isolation?
Actually, this statement is true, as Vagrant is neither a container backend nor a virtual machine. It is a manager. It can manage VirtualBox, VMware, and now Docker. Depending on what your needs are, you can achieve isolation through Vagrant via VirtualBox or Docker, but Vagrant does not provide the isolation by itself.
Now that Vagrant supports Docker, you can use it if you need to; however, Docker is very simple by itself and IMHO does not require a tool like Vagrant. When you are playing with virtual machines, on the other hand, Vagrant is very helpful.
Vagrant is just a tool to create virtual machines (or even cloud instances and Docker containers). Vagrant itself does nothing for isolation. However, the tools it can handle (such as virtual machines or Docker) can be used for isolation, but also for many other things; isolation is just one of many aspects.
For some enlightenment about the difference between Docker and VMs, see How is Docker different from a normal virtual machine?
Docker: separates the application from the underlying operating system that it runs on.
Docker virtualises the operating system for the application.
Vagrant is a virtual machine manager, so let's compare virtual machines to Docker.
Virtual machines: separate the operating system from the underlying hardware that it runs on.
A virtual machine virtualises the hardware for the operating system.
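To make that distinction concrete, here is a minimal sketch (assuming Docker is installed and can run the alpine image): the application inside a container gets its own process view, so ps lists only the container's processes rather than a whole operating system's worth of them, as a VM would.

```python
# Minimal sketch: the application inside a container gets its own process
# view (PID namespace). "ps" inside the container lists only the container's
# processes, typically just PID 1. Assumes Docker and the "alpine" image.
import subprocess

out = subprocess.run(
    ["docker", "run", "--rm", "alpine", "ps"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # typically shows only PID 1 (the "ps" process itself)
```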
What is the difference between KVM and Linux Containers (LXC)? To me it seems that LXC is also a way of creating multiple VMs within the same kernel, using both the "namespaces" and "control groups" features of the kernel.
Text from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Resource_Management_and_Linux_Containers_Guide/sec-Linux_Containers_Compared_to_KVM_Virtualization.html Copyright © 2014 Red Hat, Inc.:
Linux Containers Compared to KVM Virtualization
The main difference between the KVM virtualization and Linux Containers is that virtual machines require a separate kernel instance to run on, while containers can be deployed from the host operating system. This significantly reduces the complexity of container creation and maintenance. Also, the reduced overhead lets you create a large number of containers with faster startup and shutdown speeds.
Both Linux Containers and KVM virtualization have certain advantages and drawbacks that influence the use cases in which these technologies are typically applied:
KVM virtualization
KVM virtualization lets you boot full operating systems of different kinds, even non-Linux systems. However, a complex setup is sometimes needed. Virtual machines are resource-intensive so you can run only a limited number of them on your host machine.
Running separate kernel instances generally means better separation and security. If one of the kernels terminates unexpectedly, it does not disable the whole system. On the other hand, this isolation makes it harder for virtual machines to communicate with the rest of the system, and therefore several interpretation mechanisms must be used.
Guest virtual machine is isolated from host changes, which lets you run different versions of the same application on the host and virtual machine. KVM also provides many useful features such as live migration. For more information on these capabilities, see Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
Linux Containers:
The current version of Linux Containers is designed primarily to support isolation of one or more applications, with plans to implement full OS containers in the near future. You can create or destroy containers very easily and they are convenient to maintain.
System-wide changes are visible in each container. For example, if you upgrade an application on the host machine, this change will apply to all sandboxes that run instances of this application.
Since containers are lightweight, a large number of them can run simultaneously on a host machine. The theoretical maximum is 6000 containers and 12,000 bind mounts of root file system directories. Also, containers are faster to create and have low startup times.
LXC, or Linux Containers, are lightweight, portable, OS-level virtualization units that share the base operating system's kernel but at the same time act as isolated environments, each with its own filesystem, processes, and TCP/IP stack. They can be compared to Solaris Zones or FreeBSD Jails. As there is no virtualization overhead, they perform much better than virtual machines.
KVM represents the virtualization capabilities built into the Linux kernel itself. Because the KVM module runs within the host Linux kernel rather than as a standalone bare-metal layer, it is often described as a type 2 hypervisor, although the classification is debated.
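As a rough illustration of the "namespaces" building block that LXC (and Docker) rely on, the sketch below moves the current process into its own UTS namespace and gives it a private hostname while still sharing the host kernel. It calls unshare(2) and sethostname(2) through ctypes, so it is Linux-only and needs root; the hostname is just an example.

```python
# Minimal sketch: namespace-based isolation in a nutshell. The process gets
# its own UTS namespace (private hostname) but still shares the host kernel.
# Linux only; must be run as root (or with CAP_SYS_ADMIN).
import ctypes
import os
import socket

libc = ctypes.CDLL("libc.so.6", use_errno=True)
CLONE_NEWUTS = 0x04000000  # from <sched.h>

if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUTS) failed; run as root")

new_name = b"demo-container"
if libc.sethostname(new_name, len(new_name)) != 0:
    raise OSError(ctypes.get_errno(), "sethostname failed")

print("hostname inside namespace:", socket.gethostname())  # demo-container
print("kernel release (shared):  ", os.uname().release)    # same as the host
```

Full container runtimes combine this with mount, PID, and network namespaces plus cgroups for resource limits; the kernel, however, is always the host's.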
This whitepaper explains the difference between hypervisors and Linux containers, and also gives some history behind containers:
http://sp.parallels.com/fileadmin/media/hcap/pcs/documents/ParCloudStorage_Mini_WP_EN_042014.pdf
An excerpt from the paper:
A hypervisor works by having the host operating system emulate machine hardware and then bringing up other virtual machines (VMs) as guest operating systems on top of that hardware. This means that the communication between guest and host operating systems must follow a hardware paradigm (anything that can be done in hardware can be done by the host to the guest).
On the other hand, container virtualization (shown in figure 2) is virtualization at the operating system level, instead of the hardware level. So each of the guest operating systems shares the same kernel, and sometimes parts of the operating system, with the host. This enhanced sharing gives containers a great advantage in that they are leaner and smaller than hypervisor guests, simply because they're sharing much more of the pieces with the host. It also gives them the huge advantage that the guest kernel is much more efficient about sharing resources between containers, because it sees the containers as simply resources to be managed.
An example: Container 1 and Container 2 open the same file; the host kernel opens the file and puts pages from it into the kernel page cache. These pages are then handed out to Container 1 and Container 2 as they are needed, and if both want to read the same position, they both get the same page.
In the case of VM1 and VM2 doing the same thing, the host opens the file (creating pages in the host page cache), but then each of the kernels in VM1 and VM2 does the same thing, meaning that if VM1 and VM2 read the same file, there are now three separate pages (one in the page caches of the host, VM1, and VM2 kernels), simply because they cannot share the page in the same way a container can. This advanced sharing of containers means that the density (number of containers or virtual machines you can run on the system) is up to three times higher in the container case than in the hypervisor case.
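The "up to three times" figure follows directly from the example above; the sketch below just restates that arithmetic (it is not a measurement): with containers there is one cached copy of a page for everyone, while with VMs the host and each guest kernel cache their own copy.

```python
# Restating the excerpt's arithmetic: how many cached copies of one file page
# exist when N guests read the same file.
def cached_copies(n_guests: int, kind: str) -> int:
    if kind == "container":
        return 1                 # one copy in the host page cache, shared by all
    if kind == "vm":
        return 1 + n_guests      # host copy + one copy per guest kernel
    raise ValueError(kind)

n = 2
print("containers:", cached_copies(n, "container"))  # 1
print("VMs:       ", cached_copies(n, "vm"))         # 3 (the "three times" case)
```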
Summary:
KVM is a hypervisor based on emulating virtual hardware. Containers, on the other hand, are based on a shared operating system kernel and are much leaner. However, this imposes a limitation: because containers share a single kernel, you cannot run Windows and Linux containers on the same shared hardware.