I am very new to Chef. I have added a few cookbooks and everything seems to be working; I am able to run chef-client on my EC2 node. Now I am trying to set up the /etc/hosts file, and for that I need the instance IPs. I found that the Ohai cloud plugin (https://github.com/chef/ohai/blob/master/lib/ohai/plugins/cloud.rb)
can provide them, but I don't know how to use it.
Any suggestions?
On EC2 you need to create a hint file at /etc/chef/ohai/hints/ec2.json. The file can (and should) be empty, but it needs to exist to tell Ohai to run the EC2 logic. This is because there is no reliable way to know you are on an EC2 VM by looking only at the VM itself.
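For example, a rough sketch of creating the hint by hand on a node (knife bootstrap can also drop it for you via its --hint option):

    # Create the empty hint file so the Ohai ec2/cloud plugins run
    sudo mkdir -p /etc/chef/ohai/hints
    sudo touch /etc/chef/ohai/hints/ec2.json

    # After the next ohai / chef-client run the addresses are available, e.g.:
    ohai ec2/local_ipv4
    ohai cloud/public_ipv4

In a recipe the same data shows up under node['ec2'] and node['cloud'], which you can feed into an /etc/hosts template.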
I need help: I am trying to write a bash/shell script that will run from the Rundeck tool, as my org has more than 10,000 EC2 servers. This is what I am expecting:
The script logs in to each EC2 instance.
It shows the output of df -h, lsblk, and the Java version.
Can anyone please help me with the script?
You need to configure the EC2 nodes in Rundeck using a model source. To avoid configuring each EC2 node by hand you can use the EC2 Model Source Plugin (see the Rundeck documentation on how to install plugins). After that, you can create a job with an inline script step and a node filter pointing at your EC2 nodes.
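The inline script itself can stay very simple, since Rundeck handles the login to each matched node. A minimal sketch (plain commands only, nothing Rundeck-specific is assumed):

    #!/bin/bash
    # Runs on every node matched by the job's node filter
    echo "== Disk usage =="
    df -h
    echo "== Block devices =="
    lsblk
    echo "== Java version =="
    java -version 2>&1   # java prints its version to stderr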
I can't seem to find any documentation anywhere on how this is possible.
If I knife bootstrap a new AWS instance, and then a few weeks later that machine goes offline, I would like my Chef server to detect this, and bootstrap a new machine.
I understand how to make chef create a new AWS instance, and bootstrap the instance.
I do not know how to make chef detect that a previously deployed box is no longer available.
I can use the Chef API to search for existing nodes, but I do not know how to check that those nodes are still reachable over the network, or how to run such a check regularly.
Am I missing something simple? Most resources I have found on this issue assume it doesn't need to be discussed, as if it were self-evident.
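One building block worth noting: the Chef server records each node's last successful check-in as the ohai_time attribute, and knife can query it. A rough sketch, assuming you run it from cron or a CI job rather than from the Chef server itself (Chef does not re-bootstrap nodes on its own):

    # List all nodes with the time since their last chef-client run
    knife status

    # Search for nodes whose last run was more than an hour ago;
    # ohai_time is the epoch timestamp of the last successful run
    STALE_BEFORE=$(date -d '-1 hour' +%s)
    knife search node "ohai_time:[0 TO ${STALE_BEFORE}]" -a name

Anything that comes back stale could then be deleted and replaced, e.g. by re-running your knife bootstrap or knife ec2 server create workflow for that node.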
I am running chef 13+ on AWS Ubuntu in local mode via EC2 UserData. I have a common role which installs/configures many common things for the organization.
Chef in local mode will create a nodes directory in the repo checkout. It then creates a <private-IP>.json file there that is used as the node cache.
Everything is fine: I image the instance to an AMI and add that AMI to the LaunchConfig for AutoScaling.
However, with AutoScaling I have to remove that private-IP.json file because each new instance gets a new private IP, which effectively throws away all the cached state and work done before imaging.
One approach I have in mind is simply to rename the file and use some sed magic to replace IPs and hostnames, but I am thinking there must be a better, more Chef-based approach?
You would generally set the run list via the initial JSON file (-j) or directly via -r; both work for chef-solo and for local mode.
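As a hedged sketch of what that looks like in a user-data or first-boot script (the paths and role name are placeholders, not from the question):

    # Local mode (-z), run list supplied via a first-boot JSON file
    echo '{ "run_list": ["role[common]"] }' > /etc/chef/first-boot.json
    chef-client -z -j /etc/chef/first-boot.json

    # Or pass the run list directly, and give the node a stable name (-N)
    # so the node file is not keyed off the instance's private IP
    chef-client -z -r 'role[common]' -N app-node

Because the run list and node name are passed explicitly on every run, the file cached under nodes/ before imaging no longer needs to survive the private-IP change.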
I have an 8-cpu server and I installed Centos 7 on it. I would like to dynamically and programmatically spin up and down VM nodes to do work, ex. Hadoop nodes.
Is the technology I would use for this Vagrant or Puppet, or something else? I have played around with Vagrant, but it appears that every new node requires a new directory in the file system, and as far as I can tell I can't just spin up a new VM with an API call. It also doesn't look like there's a real API for Vagrant, just machine-readable output. And if I understand it correctly, Puppet deals with configuration management for pre-existing nodes.
Is either of these the correct technology to use or is there something else that is more fitting to what I want to do?
Yes, you can use Vagrant to spin up a new VM, and configuration of that particular VM can be done with Puppet. Take a look at: https://www.vagrantup.com/docs/provisioning/puppet_apply.html
And if your problem is having a separate directory for each VM, you're looking for a multi-machine setup: https://www.vagrantup.com/docs/multi-machine/
For an example using the multiserver setup take a look at https://github.com/mlambrichs/graphite-vagrant/blob/master/Vagrantfile
In the config directory you'll find a YAML file which defines an array that you can use to loop over the different VMs.
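On the "API call" point: Vagrant has no REST API, but with a multi-machine Vagrantfile the CLI can be driven per machine from a script, which may be close enough. A rough sketch (the machine names are hypothetical):

    vagrant up hadoop-node-3          # spin up one specific VM
    vagrant halt hadoop-node-3        # stop it again
    vagrant destroy -f hadoop-node-3  # remove it without a confirmation prompt

    # Stable, comma-separated output intended for scripts to parse
    vagrant status --machine-readable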
When autoscaling my EC2 instances for an application, what is the best way to keep every instance in sync?
For example, there are custom settings and application files like the ones below:
Apache httpd.conf
php.ini
PHP source for my application
For my autoscaling to work, all of these must be configured the same on each EC2 instance, and I want to know the best practice for keeping these elements in sync.
You could use a private AMI which contains scripts that install the software or check out the code from SVN, etc. The second possibility is to use a deployment framework like Chef or Puppet.
The way this works with Amazon EC2 is that you can pass user-data to each instance -- generally a script of some sort that runs commands, e.g. for bootstrapping. As far as I can see, CreateLaunchConfiguration allows you to define that as well.
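A minimal user-data sketch along those lines, assuming the application and its config live in SVN (the repository URL and paths are placeholders):

    #!/bin/bash
    set -e

    # Check out the application source every instance should run
    svn checkout https://svn.example.com/myapp/trunk /var/www/myapp

    # Put the shared config files in place so each instance is identical
    cp /var/www/myapp/conf/httpd.conf /etc/httpd/conf/httpd.conf
    cp /var/www/myapp/conf/php.ini    /etc/php.ini

    # Restart Apache (the service name varies by distro)
    service httpd restart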
If running this yourself is too much of an obstacle, I'd recommend a service like:
Scalarium
RightScale
Scalr (also open source)
They all offer some form of scaling.
HTH