Puppet - how to pass arguments to the command line - ruby

I am a newbie to Puppet and I wonder how I can pass arguments on the command line. Let me explain:
This is the command that I'm running (puppet apply):
C:>puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp
Site.pp:
File { backup => false }

node default {
  include 'tn'
}
This means that I am applying 'tn', which is one of the modules in my Puppet project.
For example,
I have these modules in my puppet project:
tn
ps
av
So to run a different module I need to go into this site.pp file and change it to
include 'ps'
or
include 'av'
My question is -
How do I pass these modules as arguments to the puppet apply command?
I know that I can create three .pp files, each of which includes one module (ps, av, tn),
and then my command would look like:
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\ps.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\av.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\tn.pp
But I don't think that's a good solution.
Is there another way to pass these modules as arguments to puppet apply?
In case I didn't mention it: each module is responsible for different actions.
Thanks!

I know that I can create 3 .pp files that each one contains one module
(ps, av, tn)
[...]
But, I think it's not a good solution.
Why isn't it a good solution? It seems perfectly sensible to me that if you have three different things you want to be able to do, then you have a separate file to use to accomplish each.
Nevertheless, if your modules do not use each other, then you could probably accomplish what you describe by relying on tags. Have your site manifest include all three modules:
File { backup => false }

node default {
  include 'tn'
  include 'ps'
  include 'av'
}
Then use the --tags option to select only one of those modules and all the other classes it brings in:
puppet apply --tags ps --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp

A .pp file is a class file, not a module; a module contains the classes and anything else needed to support/test those classes. Take a look at https://puppet.com/docs/puppet/5.5/modules_fundamentals.html.
Look at how modules are laid out on https://forge.puppet.com/
It's well worth looking at the PDK (https://puppet.com/docs/pdk/1.x/pdk.html), as it'll build a module for you; you just need to add the classes.
In your case you probably want to create a new module (let's call it mymodule) and put your tn.pp, ps.pp, and av.pp class files under the C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\manifests directory.
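For illustration, a minimal sketch of one of those class files (the notify resource below is just a placeholder, not something taken from your real modules):

# C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\manifests\ps.pp
class mymodule::ps {
  notify { 'mymodule::ps was applied': }
}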
Then, for local testing, use the examples pattern: in your module you'll have an examples directory, and in there you might have a file called ps.pp which contains include mymodule::ps to bring in that ps.pp class file.
The aim of the examples directory is to give you a way of passing in parameters for local testing.
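As a sketch, examples/ps.pp would contain nothing more than:

# C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\examples\ps.pp
include mymodule::ps

and you could exercise it locally with something along the lines of:

puppet apply --environment test C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\examples\ps.pp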
Back in your site.pp file you'd apply it with:
node default {
  include mymodule::ps
}
So now you want to apply different classes to different nodes, and here you hit the world of node classification; there are many ways you can do that. In your case I think you're probably doing this on a small scale, so you'd have:
node 'psserver.example.com' {
  include mymodule::ps
}

node 'tnserver.example.com' {
  include mymodule::tn
}
Have a look at some of the online training https://puppet.com/learning-training/kits/puppet-language-basics

Related

How to set a dynamic variable, once it is identified, for all recipes in a cookbook

I need help setting a global variable in Chef recipes.
I have the below series of recipes:
Discover the Tomcat home from the path attribute in attributes/default.rb:
default['tomcat_cookbook']['tomcathome'] = ['/home/tomcat', '/home/ApacheTomcat']
This recipe identifies the Tomcat installation, since one of these two directories will be present on the server.
Let's say it sets tomcathome to the directory "/home/tomcat". I then have some subsequent recipes, such as start/stop/restart Tomcat.
Currently I am running the discovery logic inside every start/stop recipe, even though I already know that on a particular server tomcathome is set to "/home/tomcat".
Is there any way I can remove the duplicated Tomcat home discovery code and make use of the identified tomcathome value in the remaining recipes?
Please suggest.
I think this would be a good use of libraries. I'll assume the cookbook name is tomcat_cookbook. In the libraries folder in a cookbook, create a file called path.rb.
Add the following code into the path.rb file. I prefer to namespace my libraries to organize my methods using CookbookName::ModuleName format.
libraries/path.rb:
module TomcatCookbook
  module Path
    # Return the first candidate directory from the tomcathome attribute
    # (assumed to be an array of paths) that actually exists on the node;
    # returns nil if none of the candidates is present.
    def install_path
      node['tomcat_cookbook']['tomcathome'].each do |path|
        return path if ::Dir.exist?(path)
      end
      nil
    end
  end
end
Within any recipe, you can include this module and use the methods in it:
# Use this include for use in the recipe
Chef::Recipe.send(:include, TomcatCookbook::Path)
# Use this include for using methods in the directory resource itself
Chef::Resource::Directory.send(:include, TomcatCookbook::Path)

Chef::Log.info("Install Path: #{install_path}")

directory "tomcat_install_path" do
  path install_path
  action :create
end
In certain situations, I have needed to create a common cookbook which includes only libraries which I can use across multiple cookbooks.
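As a rough sketch of that pattern (the names common_helpers and CommonHelpers::Path are made up), the consuming cookbook declares a dependency in its metadata and then includes the shared library module exactly as above:

# metadata.rb of the consuming cookbook
depends 'common_helpers'

# recipes/default.rb
Chef::Recipe.send(:include, CommonHelpers::Path)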

How to load attributes from a file to chefspec node

I have the following layout:
attributes/default.rb
recipes/my_recipe.rb
spec/unit/recipes/my_recipe_spec.rb
In the attributes file I have a lot of common settings like
default['framework']['folder']['lib'] = '/usr/lib/fwrk'
I would like to use them in my chefspec, like
it 'install the lib if there are changes' do
  lib_path = chef_run.node['framework']['folder']['lib']
  puts(lib_path)
end
How can I include this file in my node from SoloRunner/ServerRunner?
Run the .converge() first and you'll see them in there. But remember that you should almost never parameterize a test on the same inputs on both sides; that wouldn't be a useful test, since it doesn't check that the value is what you expected it to be.
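A minimal sketch (assuming the cookbook is called framework, the recipe is my_recipe, and SoloRunner is used):

require 'chefspec'

describe 'framework::my_recipe' do
  let(:chef_run) { ChefSpec::SoloRunner.new.converge(described_recipe) }

  it 'exposes the default attribute after converge' do
    expect(chef_run.node['framework']['folder']['lib']).to eq('/usr/lib/fwrk')
  end
end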

How to write a BitBake driver recipe which requires kernel source header files?

Introduction
I have a do_install task in a BitBake recipe which I've written for a driver where I execute a custom install script. The task fails because the installation script cannot find kernel source header files within <the image rootfs>/usr/src/kernel. This script runs fine on the generated OS.
What's Happening
Here's the relevant part of my recipe:
SRC_URI += "file://${TOPDIR}/example"
DEPENDS += " virtual/kernel linux-libc-headers "

do_install () {
    ( cd ${TOPDIR}/example/Install ; ./install )
}
Here's a relevant portion of the install script:
if [ ! -d "/usr/src/kernel/include" ]; then
    echo ERROR: Linux kernel source include directory not found.
    exit 1
fi

cd /usr/src/kernel
make scripts
...
./install_drv pci ${DRV_ARGS}
I checked changing to if [ ! -d "/usr/src/kernel" ], which also failed. install passes different options to install_drv, which I have a relevant portion of below:
cd ${DRV_PATH}/pci
make NO_SYSFS=${ARG_NO_SYSFS} NO_INSTALL=${ARG_NO_INSTALL} ${ARGS_HWINT}

if [ ${ARG_NO_INSTALL} == 0 ]; then
    if [ `/sbin/lsmod | grep -ci "uceipci"` -eq 1 ]; then
        ./unload_pci
    fi
    ./load_pci DEBUG=${ARG_DEBUG}
fi
The build: make target within ${DRV_PATH}/pci is essentially this:
make -C /usr/src/kernel SUBDIRS=${PWD} modules
My Research
I found these comments within linux-libc-headers.inc relevant:
# You're probably looking here thinking you need to create some new copy
# of linux-libc-headers since you have your own custom kernel. To put
# this simply, you DO NOT.
#
# Why? These headers are used to build the libc. If you customise the
# headers you are customising the libc and the libc becomes machine
# specific. Most people do not add custom libc extensions to the kernel
# and have a machine specific libc.
#
# But you have some kernel headers you need for some driver? That is fine
# but get them from STAGING_KERNEL_DIR where the kernel installs itself.
# This will make the package using them machine specific but this is much
# better than having a machine specific C library. This does mean your
# recipe needs a DEPENDS += "virtual/kernel" but again, that is fine and
# makes total sense.
#
# There can also be a case where your kernel extremely old and you want
# an older libc ABI for that old kernel. The headers installed by this
# recipe should still be a standard mainline kernel, not your own custom
# one.
I'm a bit unclear if I can 'get' the headers from the STAGING_KERNEL_DIR properly since I'm not using make.
Within kernel.bbclass provided in the meta/classes directory, there is this variable assignment:
# Define where the kernel headers are installed on the target as well as where
# they are staged.
KERNEL_SRC_PATH = "/usr/src/kernel"
This path is then packaged later within that .bbclass file here:
PACKAGES = "kernel kernel-base kernel-vmlinux kernel-image kernel-dev kernel-modules"
...
FILES_kernel-dev = "/boot/System.map* /boot/Module.symvers* /boot/config* ${KERNEL_SRC_PATH} /lib/modules/${KERNEL_VERSION}/build"
Update (1/21):
A suggestion on the yocto IRC channel was to use the following line:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
which is corroborated by the Yocto Project Reference Manual, which states that in version 1.8, there was the following change:
The kernel build process was changed to place the source in a common shared work area and to place build artifacts separately in the source code tree. In theory, migration paths have been provided for most common usages in kernel recipes but this might not work in all cases. In particular, users need to ensure that ${S} (source files) and ${B} (build artifacts) are used correctly in functions such as do_configure and do_install. For kernel recipes that do not inherit from kernel-yocto or include linux-yocto.inc, you might wish to refer to the linux.inc file in the meta-oe layer for the kinds of changes you need to make. For reference, here is the commit where the linux.inc file in meta-oe was updated.
Recipes that rely on the kernel source code and do not inherit the module classes might need to add explicit dependencies on the do_shared_workdir kernel task, for example:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
But I'm having difficulties applying this to my recipe. From what I understand, I should be able to change the above line to:
do_install[depends] += "virtual/kernel:do_shared_workdir"
Which would mean that the do_install task now must be run after do_shared_workdir task of the virtual/kernel recipe, which means that I should be able to work with the shared workdir (see Question 3 below), but I still have the same missing kernel header issue.
My Questions
I'm using a custom Linux kernel (v3.14) from git.kernel.org, which inherits the kernel class. Here are some of my questions:
Shouldn't the package kernel-dev be a part of any recipe which inherits the kernel class? (this section of the variables glossary)
If I add the virtual/kernel to the DEPENDS variable, wouldn't that mean that the kernel-dev would be brought in?
If kernel-dev is part of the dependencies of my recipe, wouldn't I be able to point to the /usr/src/kernel directory from my recipe? According to this reply on the Yocto mailing list, I think I should.
How can I properly reference the kernel source header files, preferably without changing the installation script?
Consider your Environment
Remember that there are different environments within the build-time environment, consisting of:
sysroots
in the case of kernels, a shared work directory
target packages
kernel-dev is a target package, which you'd install into the rootfs of the target system for certain things like kernel symbol maps which are needed by profiling tools like perf/oprofile. It is not present at build time although some of its contents are available in the sysroots or shared workdir.
Point to the Correct Directories
Your do_install runs at build time, so this is within the build directory structures of the build system, not the target one. In particular, /usr/src/ won't be correct; it would need to be some path within your build directory. The virtual/kernel do_shared_workdir task populates ${STAGING_KERNEL_DIR}, so you would want to change to that directory in your script.
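For orientation only (this is a sketch of where the source actually lives at build time, not a working installation), the task can refer to that location like this:

do_install () {
    # Build-time location of the kernel source staged by
    # virtual/kernel:do_shared_workdir; this, not /usr/src/kernel,
    # is what the install script would need to use
    bbnote "Staged kernel source: ${STAGING_KERNEL_DIR}"
    ( cd ${TOPDIR}/example/Install ; ./install )
}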
Adding a Task Dependency
The:
do_install[depends] += "virtual/kernel:do_shared_workdir"
dependency line looks correct for your use case, assuming nothing in do_configure or do_compile accesses the data there.
Reconsider the module BitBake class
The other answers are correct in the recommendation to look at module.bbclass, since this illustrates how common kernel modules can be built. If you want to use custom functions or make commands, this is fine, you can just override them. If you really don't want to use that class, I would suggest taking inspiration from it though.
Task Dependencies
Adding virtual/kernel to DEPENDS means virtual/kernel:do_populate_sysroot must run before our do_configure task. Since you need a dependency for do_shared_workdir here, a DEPENDS on virtual/kernel is not enough.
Answer to Question 3
The kernel-dev package would be built, however it would then need to be installed into your target image and used at runtime on a real target. You need this at build time so kernel-dev is not appropriate.
Other Suggestions
You'd likely want the kernel-devsrc package for what you're doing, not the kernel-dev package.
I don't think anyone can properly answer that last question here. You are using a non-standard install method: we can't know how to interact with it...
That said, take a look at what meta/classes/module.bbclass does. It sets several related variables for make: KERNEL_SRC=${STAGING_KERNEL_DIR}, KERNEL_PATH=${STAGING_KERNEL_DIR}, O=${STAGING_KERNEL_BUILDDIR}. Maybe your installer supports some of these environment variables and you could set them in your recipe?
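For example, a hedged sketch of that idea (whether the installer actually reads these variables is an assumption you would need to verify):

do_install () {
    # Mirror what module.bbclass passes to make, in case the installer honours them
    export KERNEL_SRC=${STAGING_KERNEL_DIR}
    export KERNEL_PATH=${STAGING_KERNEL_DIR}
    export O=${STAGING_KERNEL_BUILDDIR}
    ( cd ${TOPDIR}/example/Install ; ./install )
}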

puppet apply not working

I am trying to run Puppet locally on my Mac (OS X). I installed the latest versions of Puppet, Hiera and Facter. I created a module with the following structure
$ find .
.
./files
./manifests
./manifests/init.pp
./templates
and contents of hello_world/manifests/init.pp
$ cat manifests/init.pp
class hello_world {
  file { '/tmp/itworks':
    ensure => directory,
  }
}
but nothing happens when I run puppet apply hello_world/manifests/init.pp
You define a class but never include it. (The class does not get declared.)
Note that modules are not usually applied directly. Instead, you apply a manifest that includes a class from the module (often, the class that is named after the module and automagically located in module_name/manifests/init.pp). E.g.
puppet apply -e 'include hello_world'
Note that the hello_world/ directory must be located in your $modulepath (usually /etc/puppet/modules for the open source variant).
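If the module lives somewhere else, you can point puppet apply at its parent directory explicitly (the path below is only an example):

puppet apply --modulepath=/path/to/your/modules -e 'include hello_world'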
You can try :
puppet apply -e 'include hello_world'
or for a dry run
puppet apply -e 'include hello_world' --noop
For more on puppet apply, see the manual page: http://docs.puppetlabs.com/man/apply.html

List all the declared packages in chef

I'm working on an infrastructure where some servers don't have access to the internet, so I have to push packages to the local repo before declaring them to be installed with Chef.
However, we've been in a situation where Chef failed to install a package on some boxes because the package wasn't in the repo, while it succeeded on other boxes.
What I want to do is to run a Ruby/RSpec test before applying the Chef config on the nodes, to make sure the packages declared in the recipes actually exist in the repo.
In order to do that I need to be able to list all the packages that exist in our recipes.
My question is: is there any way to list all the declared packages in Chef? I had a quick look at Chef::Platform and ChefSpec but unfortunately couldn't find anything useful for my problem.
Do you have any idea where is the best place to look at?
If you use ChefSpec you can find all the packages by calling chef_run.find_resources(:package) inside some test. See the source code. Like this:
require 'chefspec'

describe 'example::default' do
  let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

  it 'does something' do
    chef_run.find_resources(:package)...
  end
end
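Building on that, still inside the same describe block, you could collect the declared resource names for your repo check (the output shown is hypothetical):

it 'lists every declared package' do
  declared = chef_run.find_resources(:package).map(&:name)
  puts declared.inspect   # e.g. ["httpd", "ntp"]
end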
You could install one or more of the community Ohai plugins. For example, the following will return information about installed software:
debian
Redhat
windows
Once the plugins are enabled they will add additional node attributes that will be searchable from chef-server.
