Need to disable chef-client on Windows

I need to provision Windows VMs with knife and run the initial configuration once with Chef... but have chef-client disabled after that. Unfortunately, setting the interval and task variables to 0 in the default.rb attributes file of the chef-client cookbook does not seem to work:
# log_file has no effect when using runit
default['chef_client']['log_file'] = 'client.log'
default['chef_client']['interval'] = '0'
default['chef_client']['splay'] = '0'
default['chef_client']['conf_dir'] = '/etc/chef'
default['chef_client']['bin'] = '/usr/bin/chef-client'
...
# Configuration for Windows scheduled task
default['chef_client']['task']['frequency'] = 'minute'
default['chef_client']['task']['frequency_modifier'] = node['chef_client']['interval'].to_i / 60
default['chef_client']['task']['user'] = 'SYSTEM'
default['chef_client']['task']['password'] = nil # Password is only required for non-SYSTEM users
Any ideas on what I need to do?

Just don't run the chef-client recipe at all.
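If the goal is only a one-time initial converge, one option (a sketch; NODE_NAME is a placeholder and it assumes the run list item is recipe[chef-client]) is to drop the recipe from the node's run list with knife after bootstrapping:
knife node run_list remove NODE_NAME 'recipe[chef-client]'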

What I ended up doing was altering the windows_service.rb recipe to disable the service:
service 'chef-client' do
  action [:disable, :stop]
end
coderanger's answer is probably fine, but this approach makes it easier to turn the client back on later if some event requires it.
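If you want to be able to flip it back on without editing the recipe again, here is a sketch using an attribute toggle (the node['chef_client']['enabled'] attribute is my own assumption, not part of the cookbook):
# hypothetical attribute toggle; set it from a role or environment
if node['chef_client']['enabled']
  service 'chef-client' do
    action [:enable, :start]
  end
else
  service 'chef-client' do
    action [:disable, :stop]
  end
end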

Related

How to use a Chef Windows reboot resource to reboot only once

I'm currently attempting to use the reboot resource in a Chef recipe:
reboot 'ADS Install Complete' do
  action :nothing
  reason 'Cannot continue Chef run without a reboot.'
  only_if { reboot_pending? }
end
...
execute 'Initialize ADS Configuration INI' do
  command "\"#{node["ads-tfs-ini"]["tfsconfig_path"]}\" unattend \/create \/type:#{node["ads-tfs-ini"]["Scenario"]} \/unattendfile:\"#{node["ads-tfs-ini"]["unattend_file_path"]}\""
  only_if { ! "#{ENV['JAVA_HOME']}".to_s.empty? }
  notifies :request_reboot, 'reboot[ADS Install Complete]', :delayed
end
I am getting an endless loop of reboots (client reboots --> chef-client runs --> chef-client reruns the run_list --> client reboots --> ...). How can I reboot just once?
You could add some validation to check whether the computer has been rebooted once.
ruby_block "reboot" do
unless File.exist?("C:\reboot") do
block do
Chef::Util::FileEdit.new('C:\reboot').write_file
Chef::ShellOut.new("shutdown /r").run_command
end
end
end
This solution isn't really elegant, but it should work. The reboot is inside the ruby block which will only run if C:\reboot DOESN'T exist. If the file doesn't exist, the block will create the file and then call the reboot. On the second chef run, the file will exist so the reboot will not be triggered.
Here is the documentation regarding ruby_block.
From the reboot Chef resource documentation:
Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, macOS, and Linux platforms.
reboot 'name' do
  action :reboot_now
end
Your only_if guard in the execute resource makes it run if ENV['JAVA_HOME'] is not empty. Very likely this environment variable is set, which is why your execute resource runs on every Chef run and triggers the reboot.
My guess is that you actually need the opposite: run the resource only if the variable is empty. For that you can just remove the ! from the line.
only_if { ENV['JAVA_HOME'].to_s.empty? }
If my previous guess is wrong, then you need to change your only_if guard to something more robust. From the command line, I understand you create some configuration files, so you don't need to run the execute resource when your config files already exist:
not_if { ::File.exist?('/path/to/file/created/by/command') }
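Putting that together with the resource from the question, a sketch (it assumes the unattend file named by node["ads-tfs-ini"]["unattend_file_path"] is what the command creates):
execute 'Initialize ADS Configuration INI' do
  command "\"#{node["ads-tfs-ini"]["tfsconfig_path"]}\" unattend /create /type:#{node["ads-tfs-ini"]["Scenario"]} /unattendfile:\"#{node["ads-tfs-ini"]["unattend_file_path"]}\""
  # skip once the unattend file already exists, so the reboot is only requested once
  not_if { ::File.exist?(node["ads-tfs-ini"]["unattend_file_path"]) }
  notifies :request_reboot, 'reboot[ADS Install Complete]', :delayed
end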

Aurora file define host port

Okay, after a week or more, my Aurora cluster is running. This was not really easy, but nevertheless I got it working.
I have a simple Aurora file:
# copy frontend into the local sandbox
clone_service = Process(
  name = 'copy service',
  cmdline = 'git clone https://citrullin@bitbucket.org/jakiku/frontend.git frontend')

install_npm_deps = Process(
  name = 'install npm dependencies',
  cmdline = 'cd frontend && npm install'
)

run_server = Process(
  name = 'run server',
  cmdline = 'node server.js'
)

# describe the task
run_frontend_service = SequentialTask(
  processes = [clone_service, install_npm_deps, run_server],
  resources = Resources(cpu = 1, ram = 128*MB, disk = 64*MB))

jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          task = run_frontend_service)
]
Nothing special. I only want to define which port I need to use. I checked Resources(port = 3000), but it doesn't work. It's not really a resource; it's an attribute in Mesos.
Generally speaking, you want to avoid static ports with Aurora jobs. Since any number of tasks could land on the same host, there's no good way to guarantee that multiple tasks won't request the same port, causing one of them to randomly fail.
The recommended way to solve this problem is to request a port from Mesos using the thermos namespace in your aurora config. For example, if you were to do something like:
run_server = Process(
  name = 'run server',
  cmdline = 'node server.js --port={{thermos.ports[http]}}'
)
Then Aurora will assign a random port to your task when it is assigned to a host.
The obvious question this raises is how do other things find your service if it's running on a randomly assigned port that can change over time as your task is moved around between hosts. The answer to this is service discovery. If you add announce=Announcer() to your job configuration, then your task will be added to a ServerSet which other tasks can use to discover and communicate with it.
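For example, a sketch based on the job from the question (the primary_port argument is an assumption; it names which requested port gets announced):
jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          task = run_frontend_service,
          # announce the port requested as {{thermos.ports[http]}} in a ServerSet
          announce = Announcer(primary_port = 'http'))
]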
Reference:
Mesos documentation on configuring agents to offer ports.
Aurora documentation on requesting ports here.

Issue with SaltStack

Recently I started taking an interest in Salt and began a tutorial on it. I am currently working on a Mac and I am having a hard time trying to start the VM (the minion) from my laptop (I am using Vagrant to start the process).
The vagrant file for the vm contains these lines:
# salt-vagrant config
config.vm.provision :salt do |salt|
  salt.run_highstate = true
  salt.minion_config = "/etc/salt/minion"
  salt.minion_key = "./minion1.pem"
  salt.minion_pub = "./minion1.pub"
end
Even though I wrote this, it gets stuck at:
Calling state.highstate... (this may take a while)
Any ideas why?
One more thing: at the next step I seem to need to modify the top.sls file, which is located in /srv/salt. Unfortunately, I cannot find the /srv directory anywhere. Why is that? Is there a way to tell the master that the top file is somewhere else?
If you don't have a top.sls created, then you won't be able to run a highstate like you have configured with the salt.run_highstate = true line.
If you don't have a /srv/salt/ directory created, then you can just create it yourself. Just make sure the user the salt-master is running as can read it.
The /srv/salt/ directory is the default location of what is known as the file_root. You can change its location by modifying the file_roots option in the master config file, /etc/salt/master.
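For example, a minimal sketch (webserver is a placeholder state name):
# /etc/salt/master
file_roots:
  base:
    - /srv/salt

# /srv/salt/top.sls
base:
  '*':
    - webserver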

Make chef cookbook recipe only run once

So I use the following recipe:
include_recipe "build-essential"
node_packages = value_for_platform(
  [ "debian", "ubuntu" ] => { "default" => [ "libssl-dev" ] },
  [ "amazon", "centos", "fedora", "centos" ] => { "default" => [ "openssl-devel" ] },
  "default" => [ "libssl-dev" ]
)

node_packages.each do |node_package|
  package node_package do
    action :install
  end
end

bash "install-node" do
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
    tar -xzf node-v#{node["nodejs"]["version"]}.tar.gz
    (cd node-v#{node["nodejs"]["version"]} && ./configure --prefix=#{node["nodejs"]["dir"]} && make && make install)
  EOH
  action :nothing
  not_if "#{node["nodejs"]["dir"]}/bin/node --version 2>&1 | grep #{node["nodejs"]["version"]}"
end

remote_file "#{Chef::Config[:file_cache_path]}/node-v#{node["nodejs"]["version"]}.tar.gz" do
  source node["nodejs"]["url"]
  checksum node["nodejs"]["checksum"]
  notifies :run, resources(:bash => "install-node"), :immediately
end
It successfully installed Node.js on my Vagrant VM, but on restart it gets executed again. How do I prevent this? I'm not that good at reading Ruby code.
To make the remote_file resource idempotent (i.e. to not download an already present file again), you have to correctly specify the checksum of the file. You do this in your code using the node["nodejs"]["checksum"] attribute. However, this only works if the checksum is specified as the SHA256 hash of the downloaded file; no other algorithm (especially not MD5) is supported.
If the checksum is not correct, your recipe will still work. However, on the next run, Chef will notice that the checksum of the existing file differs from the one you specified and will download the file again, thus notifying the install-node resource and redoing the whole compile step.
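For example, assuming a Linux machine and with <version> standing in for your actual Node.js version, you can compute the value to put into node["nodejs"]["checksum"] with:
sha256sum node-v<version>.tar.gz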
With chef, it's important that recipes be idempotent. That means that they should be able to run over and over again without changing the outcome. Chef expects to be able to run all the recipes on a node periodically, and that should be ok.
Do you have a way of knowing which resource within that recipe is causing you problems? The remote_file one is the only one I'm suspicious of being non-idempotent, but I'm not sure offhand.
Looking at the Chef wiki, I find this:
Deprecated Behavior: In Chef 0.8.x and earlier, Remote File is also used to fetch files from the files/ directory in a cookbook. This behavior is now provided by Cookbook File, and use of Remote File for this purpose is deprecated (though still valid) in Chef 0.9.0 and later.
Anyway, the way chef tends to work, it will look to see if whatever "#{Chef::Config[:file_cache_path]}/node-v#{node["nodejs"]["version"]}.tar.gz" resolves to exists, and if it does, it should skip that resource. Is it possible that install-node deletes that file when it's finished installing? If so, chef will re-fetch it every time.
You can run a recipe only once by overriding the run list with the -o modifier.
sudo chef-client -o "recipe[cookbook::recipe]"
-o RunlistItem,RunlistItem..., --override-runlist
    Replace current run list with specified items
In my experience remote_file always runs when executing chef-client, even if the target file already exists. I'm not sure why (haven't dug into the Chef code to find the exact cause of the bug), though.
You can always write a not_if or only_if to control the execution of the remote_file resource, but usually it's harmless to just let it run every time.
The rest of your code looks like it's already idempotent, so there's no harm in running the client repeatedly.
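If you do want an explicit guard anyway, here is a sketch (it assumes node["nodejs"]["dir"] from the question points at the install prefix):
remote_file "#{Chef::Config[:file_cache_path]}/node-v#{node["nodejs"]["version"]}.tar.gz" do
  source node["nodejs"]["url"]
  # skip the download entirely once node is already installed
  not_if { ::File.exist?("#{node["nodejs"]["dir"]}/bin/node") }
end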
There's an action you can specify for remote_file that will make it run conditionally:
remote_file 'target' do
  source 'wherever'
  action :create_if_missing
end
See the docs.
If you want to test whether your recipe is idempotent, you may be interested in ToASTER, a framework for systematic testing of Chef scripts.
http://cloud-toaster.github.io/
Chef recipes are executed with different configurations in isolated container environments (Docker VMs), and ToASTER reports various metrics such as system state changes, convergence properties, and idempotence issues.

Puppet conditional check between vagrant & ec2

I've looked at the documentation for puppet variables and can't seem to get my head around how to apply this to the following situation:
if vagrant (local machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'dev.demo.com',
    root        => '/vagrant/public',
  }
else if aws ec2 (remote machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'demo.com',
    root        => '/home/ubuntu/demo.com/public',
  }
Thanks
Try running facter on both your vagrant host and your EC2 instance, and look for differences. I suspect that 'facter virtual' may be different between the two hosts, or that the EC2 may return a bunch of ec2_ facts that won't be present on the vagrant host.
Then you can use this fact as a top level variable as per below. I switched to a case statement as well, since that's a little easier to maintain IMHO, plus you can use the default block for error checking.
case $::virtual {
  'whatever vagrant returns' : {
    <vagrant specific provisioning>
  }
  'whatever the EC2 instance returns' : {
    <EC2 specific provisioning>
  }
  default : {
    fail("Unexpected virtual value of $::virtual")
  }
}
NOTE: In the three years since this response was posted, Vagrant has introduced the facter hash option. See thomas's answer below for more details. I believe that this is the right way to go and it makes my proposed kernel command line trick pretty obsolete. The rationale for using a fact hasn't changed, though, only strengthened (e.g. Vagrant currently supports an AWS provider).
ORIGINAL REPLY: Be careful - you assume that you only use virtualbox for vagrant and vice versa, but Vagrant is working on support for other virtualization technologies (e.g. kvm), and you might use VirtualBox without vagrant one day (e.g. for production).
Instead, the trick I use is to pass the kernel a "vagrant=yes" parameter when I build the basebox, which is then accessible via /proc/cmdline. You can then create a new fact based on that (e.g. write an /etc/vagrant file and check for it in subsequent facter runs).
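A minimal custom fact along those lines might look like this (a sketch; the fact name vagrant and the vagrant=yes kernel parameter are the assumptions described above):
# e.g. <module>/lib/facter/vagrant.rb
Facter.add(:vagrant) do
  setcode do
    # true when the basebox kernel was booted with vagrant=yes
    File.read('/proc/cmdline').include?('vagrant=yes')
  end
end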
Vagrant has a great utility for providing Puppet facts:
facter (hash) - A hash of data to set as available facter variables
within the Puppet run.
For example, here's a snippet from my Vagrantfile with the Puppet setup:
config.vm.provision "puppet", :options => ["--fileserverconfig=/vagrant/fileserver.conf"] do |puppet|
  puppet.manifests_path = "./"
  puppet.module_path = "~/projects/puppet/modules"
  puppet.manifest_file = "./vplan-host.pp"
  puppet.facter = {
    "vagrant_puppet_run" => "true"
  }
end
And then we make use of that fact for example like this:
$unbound_conf = $::vagrant_puppet_run ? {
  'true'  => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf.vagrant',
  default => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf',
}

file { '/etc/unbound/unbound.conf':
  owner  => root,
  group  => root,
  notify => Service['unbound'],
  source => $unbound_conf,
}
Note that the fact is only available during puppet provision time.
