I'm looking for the correct way to set chef-client's log level when creating a server using knife ec2.
My initial thought was setting the log level configuration in my knife.rb file like this:
log_level :debug
This didn't result in any visible change in the log level.
I also tried turning on the -VV option in knife ec2, but that just results in verbose output from knife ec2 itself.
When I run chef-client directly after logging onto the server, I can get debug information with no problems using:
sudo chef-client -l debug
That would be sufficient, but I'm investigating an underlying problem that only occurs on the initial server bootstrap.
Here is a simple hack/work-around that I discovered to control the log level. In knife.rb, include a line like the following:
chef_client_path 'chef-client -l debug'
You can see why this works by looking at this line of the Chef source code. I confirmed that this works in Chef 11.6.
It may work in other versions.
Currently the default bootstrap templates always set the default :auto log level on the node. There is at least one ticket that seems to be related.
So your only option for now is to create your own bootstrap template that adds log_level :debug to /etc/chef/client.rb. You can copy and modify e.g. the default "chef-full" template and then pass it as a parameter to knife, as sketched below.
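A sketch of that approach (the flag is --template-file in older knife versions and --bootstrap-template in newer ones; paths vary by install, so treat these as placeholders):
gem contents chef | grep chef-full.erb
cp /path/shown/above/chef-full.erb ~/.chef/bootstrap/chef-full-debug.erb
Then edit the copy so the section that writes /etc/chef/client.rb includes the line:
log_level :debug
and bootstrap with:
knife ec2 server create --template-file ~/.chef/bootstrap/chef-full-debug.erb ...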
Trying to google something for GoLand vs Golang is proving to be quite hard; everything I search for comes back about code or switching profiles, and that is all already handled.
I had a project that was taking in JSON and processing the data. I was able to use the run and debug button to build and debug my Go code with the default configuration.
That changed: now I am pulling data files from S3, which requires authenticating to AWS, and we use aws-vault for that.
The issue I am running into is that this configuration has no additional settings. There is a checkbox to Run after build, but no way for me to say run with aws-vault.
Now I have to uncheck Run after build and add the flags
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
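In other words, the manual flow looks roughly like this (app and user are placeholders for my binary and aws-vault profile):
go build -gcflags="-N -l" -o app .
aws-vault exec user -- ./app
# then attach to the running process from the IDE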
What I am looking for is being able to run aws-vault exec user -- go ... within the IDE so that I do not need a build step, a run step, and then a manual attach to the process.
Figured out at least what I feel is a better solution, one that allows you to run any code (including CLI tools) that uses an AWS SDK.
I am on a Mac, so osascript works for me, but the prompt can be whatever your OS supports. Or, if you have a YubiKey, you can use prompt=ykman.
In ~/.aws there are two files, config and credentials; these tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role that you assume; all the others are ones that the code escalates to.
[default]
output=json
region=<your region>
mfa_serial=arn:aws:iam::<you>
[profile dev-base]
source_profile=default
role_arn=arn:aws:iam::<account to escalate to>
[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>
[dev]
region = <your region>
[staging]
region = <your region>
Note: one oddity is that I had to put the role in this file with the region so that the role exists.
The next part may not be needed if you are not using Java; you could put the full role in the previous file instead. But since I also use Java, this is my setup in ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec dev-base -j --prompt=osascript
[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec staging-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified per profile. Something in Go was not happy with the AWS_CA_BUNDLE environment variable, and this appears to work.
Now when the code is run, a pop-up displays asking for an MFA token.
Also, when running any AWS CLI command you can pass the profile you want with --profile, e.g. aws s3 ls --profile dev, and the pop-up will appear.
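For code run from the IDE, here is a minimal sketch (assuming the aws-sdk-go-v2 modules; "dev" is the profile from the files above) showing the SDK resolving that profile, which fires credential_process and therefore the aws-vault MFA prompt, with no wrapper needed in the run configuration:
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Loading the shared config resolves the profile's credential_process,
	// which runs aws-vault and triggers the MFA pop-up on first use.
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithSharedConfigProfile("dev"), // or set AWS_PROFILE=dev in the run configuration
	)
	if err != nil {
		log.Fatal(err)
	}

	// Any SDK call now uses the aws-vault supplied credentials.
	out, err := s3.NewFromConfig(cfg).ListBuckets(context.TODO(), &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(*b.Name)
	}
}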
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them, and it seems to give the best workflow.
chef-client v12.15.19 (MSI installer)
On Windows Server 2012 R2
Instead of having to be in the directory where knife.rb is located or put knife.rb in one of the pre-determined locations where knife looks for that config, can I just pass it in as an argument?
Example: knife.bat node list -config_file c:\some\other\place\knife.rb
I'm just curious if this is possible, because I didn't see it anywhere in the docs. I know I could work around this with an environment variable and possibly other ways, but I was just wondering if there was an arg to pass the knife.rb directly.
Yes, you can specify a different configuration file. Here's a snip of the knife --help output from the latest Chef 12.x releases (at the time of this writing):
-c, --config CONFIG The configuration file to use
--config-option OPTION=VALUE Override a single configuration option
So you should be able to do knife -c .... I use this regularly to manage different knife configuration files for different hosted chef instances. I don't see it mentioned in any docs explicitly either.
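For example, using the path from the question:
knife node list -c c:\some\other\place\knife.rb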
Yes, you can specify it, but mind the parameter sequence.
knife cookbook upload my_test_cookbook -c ./knife.rb
You can specify options only after your main command is complete.
You can check the same in knife help:
Usage: knife sub-command (options)
I have chef configured to add "/etc/chef/ohai_plugins" to Ohai::Config[:plugin_path]. However, the Chef documentation says:
"The Ohai executable ignores settings in the client.rb file when Ohai is run independently of the chef-client."
So, how can I get a stand-alone run of ohai to load and use the plugins in that custom path?
(Background: I have a custom plugin that reports some information that we keep track of for a fleet of servers, like whether a server has been patched for heartbleed or shellshock. I want to be able to run "ssh somehost ohai", parse the JSON that gets sent back, and extract the information I need.)
Thanks.
Outside of chef, you can add an additional plugin path using the -d switch, e.g.
$ ohai -d /etc/chef/ohai_plugins
The relevant source code is at:
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L25-L28
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L78-L80
The option to specify a custom config file for Ohai was sadly removed last year with https://github.com/chef/ohai/commit/ebabd088673cf3e36d600bd96aeba004077842f1
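For the fleet-reporting use case in the question, something along these lines should then work (the jq filter and attribute name are made up for illustration; use whatever your plugin reports):
ssh somehost "ohai -d /etc/chef/ohai_plugins" | jq '.patched.heartbleed'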
Hope this answers your question.
This will be possible soon via the implementation of Chef RFC 53: https://github.com/chef/chef-rfc/blob/master/rfc053-ohai-config.md
I feel silly asking this question as it seems to work flawlessly for most people, but I couldn't solve the following problem I encountered after setting up a Chef server 12 on RHEL 6 and the ChefDK 0.6.0 on my Mac.
The chef server setup went through like a charm as described in the documentation, no errors at all. When I wanted to use my machine as a workstation to push cookbooks to the server I always got the error "The object you are looking for could not be found". According to other Stack Overflow posts (0, 1, 2) this is likely due to a configuration issue in knife.rb. Nevertheless, I used "knife configure" to set up the knife.rb file and double-checked for any typos in the path. In addition, according to the knife.rb documentation page I used the attributes properly.
Anyone have an idea what could cause the problem?
log_level :info
log_location STDOUT
node_name "nodermatt"
client_key "/Users/odermatt/chef-repo/.chef/nodermatt.pem"
validation_client_name "Adobe-validator.pem"
validation_key "/Users/odermatt/chef-repo/.chef/Adobe-validator.pem"
chef_server_url "https://sj1010005158157.corp.adobe.com:443/organizations/Adobe"
syntax_check_cache_path "/Users/odermatt/chef-repo/.chef/syntax_check_cache"
cookbook_path [ "/Users/odermatt/chef-repo/cookbooks" ]
You need to perform knife commands under the .chef directory. I too had the same problem; for a change I tried under the .chef directory, and it worked. Try "knife client check" and "knife cookbook upload yourcookbook".
In the knife.rb file, give chef_server_url with your org's name, i.e. the org to which you want to upload the cookbook. It typically looks like "https://api.chef.io/organizations/orgname". Also give the path to your cookbooks directory.
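For example, a minimal knife.rb along these lines (all values are placeholders):
node_name        "yourclientname"
client_key       "/path/to/.chef/yourclientname.pem"
chef_server_url  "https://api.chef.io/organizations/orgname"
cookbook_path    [ "/path/to/chef-repo/cookbooks" ]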
I had this same issue; I could not run
knife node run_list add chefnode 'recipe[cron-delvalidate::default]'
to add the cron-delvalidate recipe to the node named chefnode.
I found out, after reading this, that it is Chefnode, not chefnode, with a capital "C". This solved my issue.
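i.e. with the capitalized node name:
knife node run_list add Chefnode 'recipe[cron-delvalidate::default]'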
The response was:
Chefnode:
run_list: recipe[cron-delvalidate::default]
I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin
use puppet's --profile / --evaltrace options to find where you lose time on full provisioning
use package-based distribution:
e.g. rvm install ruby-2.0.0 vs a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and faster
don't code modules
try reusing some from the forge/github/...
note that this can go against my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands as you modify your files
you can perhaps set up guard to launch lint/test/spec/provision as you save
you can also send notifications from the guest to the host machine with vagrant-notify
test without actually provisioning in vagrant
rspec-puppet (ideal when refactoring modules)
test your provisioning instead of checking manually
stop vagrant ssh-ing to check whether a service is running or a config has a given value
launch server-spec (see the sketch after this list)
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
if you are a bit frustrated by puppet... take a look at ansible
easy to set up (no ruby to install/compile)
you can select the portions of stuff you want to run with tags
you can share the playbooks via synced folders and run ansible locally in the vagrant box (no librarian-puppet to launch)
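To illustrate the server-spec point above, a minimal serverspec file might look like this (nginx and port 80 are just example values):
require 'serverspec'
set :backend, :exec

# assert the provisioned state instead of eyeballing it over ssh
describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end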
Update: after discussion with @garethr, take a look at his latest presentation about guard.
I recommend using language-puppet. It comes with a command line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined, and what the "class stack" was at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (it breaks on undefined variables, for example)
It lets you print the content of any file to standard output, which is useful for developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.
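If I remember the README correctly, the basic invocation looks something like this (the paths and node name are placeholders; check the project documentation for the exact flags):
puppetresources -p /path/to/your/puppet/dir -o node.example.com
where -p points at your Puppet code and -o names the node whose catalog you want to compute.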