I have an 8-CPU server running CentOS 7. I would like to dynamically and programmatically spin up and tear down VM nodes to do work, e.g. Hadoop nodes.
Is Vagrant or Puppet the technology to use for this, or something else? I have played around with Vagrant, but it appears that every new node requires a new directory in the file system, and as far as I can tell I can't just spin up a new VM with an API call. There doesn't even seem to be a real API for Vagrant, just machine-readable output. And if I understand it correctly, Puppet deals with configuration management for pre-existing nodes.
Is either of these the correct technology to use or is there something else that is more fitting to what I want to do?
Yes, you can use Vagrant to spin up a new VM. Configuration of that particular VM can be done using Puppet. Take a look at: https://www.vagrantup.com/docs/provisioning/puppet_apply.html
And if your problem is having separate directories for each VM, you're looking for a multi-machine setup: https://www.vagrantup.com/docs/multi-machine/
For an example of a multi-machine setup, take a look at https://github.com/mlambrichs/graphite-vagrant/blob/master/Vagrantfile
In the config directory you'll find a YAML file defining an array that you can loop over to create the different VMs.
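The pattern looks roughly like this. This is a minimal sketch, not the linked repo's actual Vagrantfile; the `config/servers.yaml` layout, box name, and memory keys are assumptions.

```ruby
# Vagrantfile — one directory, many VMs, each defined from a YAML entry.
require 'yaml'

# config/servers.yaml might look like:
#   - name: hadoop-1
#     memory: 2048
#   - name: hadoop-2
#     memory: 2048
servers = YAML.load_file(File.join(__dir__, 'config', 'servers.yaml'))

Vagrant.configure('2') do |config|
  config.vm.box = 'centos/7'

  servers.each do |server|
    config.vm.define server['name'] do |node|
      node.vm.hostname = server['name']
      node.vm.provider 'virtualbox' do |vb|
        vb.memory = server['memory']
      end
      # Per-node provisioning, e.g. Puppet:
      # node.vm.provision 'puppet' do |puppet|
      #   puppet.manifests_path = 'manifests'
      #   puppet.manifest_file  = 'default.pp'
      # end
    end
  end
end
```

With this, `vagrant up hadoop-2` starts just that one node, and adding a machine is a matter of appending an entry to the YAML file — no new directory needed.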
Related
As per the HashiCorp documentation on Nomad + Consul, the Consul service mesh cannot be run on macOS/Windows, since those platforms do not support bridge networking.
https://www.nomadproject.io/docs/integrations/consul-connect
What is the recommended way to setup a local development environment for Nomad+Consul?
I'd suggest having a look at setting up your local environment using Vagrant (also a HashiCorp product) and VirtualBox. There are plenty of examples online, for instance:
Here is one of the most recent setups with Nomad and Consul, although it is not parametrised much.
Here is one with the core HashiCorp stack, i.e. Nomad, Vault and Consul. This repo is quite old, but that merely means it uses old versions of the binaries, which should be easy to update.
Here is one with only Vault and Consul, but you can add Nomad in a similar way. In fact, this Vagrant setup and the way its files are structured seem pretty close to the one above.
I ran the first two last week with a simple
vagrant up
and it worked almost like a charm. I did need to upgrade my VirtualBox, and maybe run vagrant up a couple of times because of some weird runtime errors I didn't want to debug.
Once Vagrant finishes the build, you can run
vagrant ssh
to get inside the created VM, although the configs are set up with volume mounting/file syncing, and all UI components are also exposed on the default ports.
I can't seem to find any documentation anywhere on how this is possible.
If I knife bootstrap a new AWS instance, and then a few weeks later that machine goes offline, I would like my Chef server to detect this, and bootstrap a new machine.
I understand how to make chef create a new AWS instance, and bootstrap the instance.
I do not know how to make chef detect that a previously deployed box is no longer available.
I can use the chef API to search for existing nodes. But I do not know how to check that those nodes are still accessible over network, or how to run this check regularly.
I believe I am missing something simple. Most resources I have found on this issue assume it doesn't need to be discussed, as if it were self-evident.
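One common approach: Chef itself doesn't monitor liveness, but every successful chef-client run records an `ohai_time` attribute (the epoch timestamp of the last converge) on the node object. A script run from cron can query the Chef server, flag nodes whose last check-in is older than some threshold, and re-bootstrap them. Here is a minimal sketch of the staleness check; the knife commands in the comments are illustrative glue and depend on your setup.

```ruby
require 'time'

# A node's 'ohai_time' attribute holds the epoch time of its last
# chef-client run. If the node hasn't converged within the threshold,
# treat it as dead and trigger re-provisioning.
def stale?(ohai_time, threshold_seconds, now = Time.now.to_i)
  (now - ohai_time.to_i) > threshold_seconds
end

# Hypothetical glue (adapt to your environment): fetch nodes via Chef
# search, then re-bootstrap the stale ones, e.g.
#   data = JSON.parse(`knife search node 'role:worker' -a ohai_time -F json`)
#   ... for each stale node: `knife ec2 server create ...` and
#       `knife node delete OLD_NODE -y` to clean up the dead entry.
```

`knife status` does a version of this check for you interactively (it lists nodes by time since last check-in); the script above just automates the same signal.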
I am a newbie to Vagrant. So far I know how to create multiple machines and provision them using a single Vagrantfile. Currently I am working on a project which requires an auto-scaling feature for an application. I am creating 3 VMs and provisioning them using Chef. I would like to know whether there is a way to create a 4th Vagrant VM and provision it at runtime when load increases on all 3 VMs (i.e. auto-scaling). I am using HAProxy as the load balancer on my first VM.
Thanks in advance.
There's no reason why you couldn't provision your 4th VM automatically, but there's no auto-scaling feature built into Vagrant.
Basically you will need to build a script to check the load on the VM or the load on your application, depending on which threshold you want to trigger a new VM.
There's no built-in capability because:
monitoring the load of the VM is OS-specific: if you want to spin up a new VM when CPU/RAM hits a peak, you'll need to check those metrics yourself
monitoring the load on your application again requires monitoring specific to your app's stack/framework
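Such a script can be very small. Here is a sketch of the "check load, spin up a VM" decision, assuming Linux (`/proc/loadavg`), a 2-core box, and a spare machine named `web4` defined in the Vagrantfile — all of those names and thresholds are illustrative.

```ruby
CORES = 2
THRESHOLD = 0.8 # scale out when 1-min load exceeds 80% of core count

# Pure decision function: is the 1-minute load average too high?
def overloaded?(one_min_load, cores = CORES, threshold = THRESHOLD)
  one_min_load > cores * threshold
end

# Cron-driven check (commented out so the sketch stays side-effect free):
#   load1 = File.read('/proc/loadavg').split.first.to_f
#   # 'web4' would be defined in the Vagrantfile with autostart disabled:
#   #   config.vm.define 'web4', autostart: false
#   system('vagrant', 'up', 'web4') if overloaded?(load1)
```

You'd then also need to register the new VM with HAProxy (e.g. re-render its backend list during provisioning) for the new node to actually receive traffic.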
Vagrant is a tool for development and testing. It is not a production provisioning solution. Look at tools like Terraform, SparkleFormation, and CloudFormation.
Our infrastructure is getting pretty complex with many moving pieces so I'm setting up Vagrant with Ansible to spin up development environments.
My question is who (Vagrant or Ansible or another tool) should be responsible for starting various services such as
rails s (for starting rails server)
nginx
nodejs (for a separate API)
I think the answer you're looking for is Ansible (or another tool).
Vagrant has capabilities to run scripts and start services. Once you add a configuration management tool, it should do exactly that. That's part of its job: starting and managing services.
You want the same application configuration regardless of the machine you're spinning up (ESXi, Amazon EC2, Vagrant, whatever), and the best way to do that is outside of Vagrant.
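In Ansible terms, that means service management lives in the playbook, which Vagrant merely invokes via its ansible provisioner. A rough sketch (the unit names `myapp-rails` and `myapp-api` are hypothetical — substitute whatever systemd units wrap your Rails and Node processes):

```yaml
# playbook.yml — same tasks whether the target is Vagrant, EC2, or ESXi
- hosts: all
  become: true
  tasks:
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true

    - name: Ensure the Rails app service is running
      service:
        name: myapp-rails   # hypothetical systemd unit
        state: started

    - name: Ensure the Node.js API service is running
      service:
        name: myapp-api     # hypothetical systemd unit
        state: started
```

Because the playbook owns service startup, switching from `vagrant up` locally to provisioning a cloud instance changes nothing about how the services come up.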
I have some professional servers, and I want to create a cluster of 7-15 machines with CoreOS. I'm a little familiar with Proxmox, but I'm not clear on how to create a virtual machine (VM) with CoreOS on Proxmox. I'm also not sure whether a cluster of CoreOS VMs on Proxmox is the right thing to do.
So I need to know:
How to create a VM with CoreOS on Proxmox.
Whether Proxmox is viable for creating a CoreOS cluster.
I have no experience with Proxmox, but if you can make an image that runs then you can use it to stamp out the cluster. What you'd need to do is boot the ISO, run the installer and then make an image of that. Be sure to delete /etc/machine-id before you create the image.
CoreOS uses cloud-config to connect the machines together and configure a few parameters related to networking -- basically anything to get the machines talking to the cluster. A cloud-config file should be provided as a config-drive image, which is basically like mounting a CD-ROM to the VM. You'll have to check the docs on Proxmox to see if it supports that. More info here: http://coreos.com/docs/cluster-management/setup/cloudinit-config-drive/
The other option you have is to skip the VMs altogether and instead of using Proxmox, just boot CoreOS directly on your hardware. You can do this by booting the ISO and installing or doing something like iPXE: http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/
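For reference, a minimal cloud-config for joining machines into a cluster looks roughly like this (sketch based on the CoreOS docs linked above; generate your own discovery URL with `curl https://discovery.etcd.io/new?size=7`, where size is your cluster size, and substitute the token):

```yaml
#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<your-token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```

The same file works whether it's delivered via config-drive to a VM or passed to the bare-metal installer, so you can decide on Proxmox vs. direct boot without redoing the cluster config.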