Docker Machine vs Virtual Machine

I've been reading about Docker Machine recently, and while I roughly get the general idea, I'm still not clear on a few points, specifically:
When you create a regular VM in VMware you define RAM, CPU, HDD, etc., or in the case of a cloud provider, the instance type/size. With docker-machine, however, you seem to specify only a driver and a machine name. How does Docker Machine know what instance type/size or hardware specs to use?
How is connecting via docker-machine different from SSHing directly into my VMs?

After reading more of the online documentation, I found that the individual VM drivers expose flags for setting RAM, CPU, and disk size (--virtualbox-cpu-count, --virtualbox-memory, and --virtualbox-disk-size for the VirtualBox driver); anything not specified falls back to a default value (e.g. 1 CPU).
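For example, here's a sketch of creating a sized VirtualBox machine and of the two flavours of "connecting" (the machine name and flag values are arbitrary examples):

```sh
# Create a VirtualBox machine with explicit specs; any flag you omit
# falls back to the driver's default (e.g. 1 CPU).
docker-machine create \
  --driver virtualbox \
  --virtualbox-cpu-count 2 \
  --virtualbox-memory 4096 \
  --virtualbox-disk-size 40000 \
  dev-machine

# "Connecting" with docker-machine normally means pointing the local
# docker client at the machine's daemon via environment variables...
eval "$(docker-machine env dev-machine)"
docker ps    # now talks to the daemon inside dev-machine

# ...whereas this is an ordinary SSH session into the VM itself:
docker-machine ssh dev-machine
```

So the difference from plain SSH is that docker-machine mostly configures your local client to drive the remote Docker daemon over TLS, rather than giving you a shell.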
More importantly though, the ISO this product uses, Boot2Docker, is deprecated, and with more powerful orchestration tools out there I'd say there's no room left for Docker Machine (R.I.P.).

Related

Ansible: How to remove & register a VM from vCenter?

I need to do a vMotion, for which I would need to:
Unregister the machine from VMware vCenter.
Register that machine in another cluster.
Is this possible with an Ansible module?
Thanks in advance!
Within the same cluster, have you tried:
vmware_vmotion – Move a virtual machine using vMotion, and/or its vmdks using storage vMotion?
(A minimal playbook sketch follows this answer.) But you're talking about 'another cluster', so I think it won't work. As stated in Live Migration of Virtual Machines:
VMware vSphere vMotion is a zero downtime live migration of workloads
from one server to another. This capability is possible across
vSwitches, Clusters, and even Clouds
So migrating from one cluster to another is not a vMotion (you will have downtime).
What you would have to do is stop the VM, download its vmdk files from one cluster, upload them to the other, and restore the VM from them (but I've never done that; I only have one cluster).
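For the same-cluster case mentioned above, a minimal sketch of a vmware_vmotion play might look like this (the vCenter address, credentials, VM name, and destination host are all placeholders):

```yaml
# Sketch: vMotion a VM to another ESXi host within the same cluster.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Move my-vm to another host via vMotion
      vmware_vmotion:
        hostname: vcenter.example.com           # placeholder vCenter
        username: administrator@vsphere.local   # placeholder account
        password: "{{ vcenter_password }}"
        validate_certs: false
        vm_name: my-vm                          # placeholder VM name
        destination_host: esxi02.example.com    # placeholder ESXi host
```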
As far as I can tell, when using vmware_guest, setting a VM's state to "absent" will destroy it rather than unregister it.
Similarly, I don't see any way to add a VM back to the inventory from a VMX file.

Is autoscaling possible in Vagrant?

I am a newbie to Vagrant. So far I know how to create multiple machines and provision them using a single Vagrantfile. Currently I am working on a project which requires an auto-scaling feature for an application. I am creating 3 VMs and provisioning them with Chef. I would like to know whether there is a way to create a 4th Vagrant VM and provision it at runtime when the load increases on all 3 VMs (i.e. auto-scaling). My first VM runs HAProxy as the load balancer.
Thanks in advance.
There's no reason you could not provision your 4th VM automatically, but there is no auto-scaling feature built into Vagrant.
Basically you will need to build a script that checks the load on the VMs or on your application, depending on which threshold you want to use to trigger a new VM; a rough sketch follows the list below.
There's no built-in capability because:
monitoring the load of the VM is OS-specific: if you want to spin up a new VM when CPU/RAM peaks, you will need to check those metrics yourself;
monitoring the load on your application requires monitoring of its own, which depends on your app's stack/framework.
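A rough sketch of such a trigger script, run from cron on the host (the machine names haproxy and web4 are hypothetical entries in the Vagrantfile, and the threshold is arbitrary):

```sh
#!/bin/sh
# Poll the 1-minute load average on the HAProxy VM; if it crosses the
# threshold and the 4th VM isn't created yet, bring it up provisioned.
THRESHOLD=4.0
LOAD=$(vagrant ssh haproxy -c "cut -d' ' -f1 /proc/loadavg" 2>/dev/null)
if [ "$(echo "$LOAD > $THRESHOLD" | bc)" -eq 1 ]; then
  vagrant status web4 | grep -q 'not created' && vagrant up web4 --provision
fi
```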
Vagrant is a tool for development and testing. It is not a production provisioning solution. Look at tools like Terraform, SparkleFormation, and CloudFormation.

Can I create a Hadoop cluster with a single VM?

I am experienced in Java and wanted to get my hands dirty with Hadoop. I have gone through the basics and am now preparing for the practical side.
I have started with the tutorials at https://developer.yahoo.com/hadoop/tutorial/ for setting up and running Hadoop on a virtual machine.
So, to create a cluster I need multiple virtual machines running in parallel, right? And I need to add the IP addresses of all of them to hadoop-site.xml. Or can I do it with a single virtual machine?
No, you cannot create a cluster with a single VM; a cluster by definition is a group of machines.
If your host machine has a good configuration, you can run any number of guest OSes on top of it, and that way you can build a Hadoop cluster (1 NameNode, 1 SecondaryNameNode, 1 DataNode).
If you want, you can instead install Hadoop in pseudo-distributed mode (all services running on one machine), which behaves like a testing machine.
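For reference, pseudo-distributed mode boils down to a couple of settings; in recent Hadoop releases the old hadoop-site.xml is split into core-site.xml, hdfs-site.xml, and mapred-site.xml. A minimal sketch (older 1.x releases call the first property fs.default.name, and the port is the conventional one):

```xml
<!-- core-site.xml: point the default filesystem at this single VM -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: with only one DataNode, keep replication at 1 -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```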
You can set up a multi-node cluster using any hypervisor such as Oracle VirtualBox. Create 5 nodes (1 NameNode, 1 SecondaryNameNode, 3 DataNodes). Assign each node its IP address and set up all the configuration files on all the nodes. There are two files, masters and slaves: on the NameNode, put the IP address of the SecondaryNameNode in the masters file and the IP addresses of all 3 DataNodes in the slaves file. Also set up SSH connectivity between all the nodes using public keys.
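For illustration, the two files on the NameNode might look like this (the IP addresses are made up):

```
# conf/masters -- the SecondaryNameNode
192.168.1.11

# conf/slaves -- one DataNode per line
192.168.1.12
192.168.1.13
192.168.1.14
```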

How to make a CoreOS cluster on my local infrastructure?

I have some professional servers, and I want to create a cluster of 7-15 CoreOS machines. I'm a little familiar with Proxmox, but I'm not clear on how to create a virtual machine (VM) with CoreOS on Proxmox. I'm also not sure whether a cluster of CoreOS VMs on Proxmox is the right approach at all.
So, I need to know:
how to create a VM with CoreOS on Proxmox;
whether Proxmox is viable for creating a CoreOS cluster.
I have no experience with Proxmox, but if you can make an image that runs, then you can use it to stamp out the cluster. What you'd need to do is boot the ISO, run the installer, and then make an image of that. Be sure to delete /etc/machine-id before you create the image.
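From the booted ISO that amounts to something like the following (the target disk, release channel, and root-partition number are assumptions; check your own layout):

```sh
# Write CoreOS to disk, embedding a cloud-config:
#   -d target disk, -C release channel, -c cloud-config file
sudo coreos-install -d /dev/sda -C stable -c cloud-config.yaml

# Before imaging, mount the installed root partition (partition 9 in
# the stock CoreOS layout) and remove the machine ID.
sudo mount /dev/sda9 /mnt && sudo rm /mnt/etc/machine-id
```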
CoreOS uses cloud-config to connect the machines together and configure a few parameters related to networking -- basically anything to get the machines talking to the cluster. A cloud-config file should be provided as a config-drive image, which is basically like mounting a CD-ROM to the VM. You'll have to check the docs on Proxmox to see if it supports that. More info here: http://coreos.com/docs/cluster-management/setup/cloudinit-config-drive/
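For reference, a minimal cloud-config from that era looks roughly like this (the discovery URL's token is a placeholder you would generate at https://discovery.etcd.io/new):

```yaml
#cloud-config
coreos:
  etcd:
    # placeholder token -- each cluster needs its own discovery URL
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```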
The other option you have is to skip the VMs altogether and instead of using Proxmox, just boot CoreOS directly on your hardware. You can do this by booting the ISO and installing or doing something like iPXE: http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/

How many Docker containers can I run simultaneously on a single host?

I am new to LXC and Docker. Does the maximum container count depend solely on CPU and RAM, or are there other factors that limit running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB of RAM, with Docker running in a VirtualBox boot2docker VM given 2 GB of RAM and 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reported CPU use at around 30-55% user and 10-12% system (each redis process seemed to use 0.2%). I also got timeouts while trying to connect to a redis server.
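The kind of loop behind those numbers is trivial to reproduce; note that the failure above ("too many open files") is a file-descriptor limit rather than CPU or RAM, so raising the daemon's ulimit would presumably move the ceiling:

```sh
#!/bin/sh
# Start detached, idle redis containers until `docker run` fails;
# the final count is the practical ceiling on this host.
i=0
while docker run -d redis >/dev/null 2>&1; do
  i=$((i + 1))
done
echo "managed to start $i containers"
```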
