In a chef recipe I have the following code:
if (node['server1']['PT1'] == true)
setup('PT1')
elsif (node['server1']['PT2'] == true)
setup('PT2')
end
I am checking my attributes to see whether the value is true for either PT1 or PT2. This works correctly if I hardcode server1 into the code, but I want to do it dynamically depending on the server running the recipe. How would I replace node['server1'] with something like node.name so it can look up different servers in the attribute file? An example of my attributes is:
default['server1'][...]...
default['server2'][...]...
default['server3'][...]...
default['server4'][...]...
If I can dynamically look at the different servers, that'd be the ideal result.
It depends on your naming convention. It doesn't look like Ohai gathers the node name automatically, but it does gather a lot of other information.
If you have a standard around your node names, such as using their hostname or fqdn as the node name, then you can simply query for that.
node['hostname']...
node['fqdn']...
If you use a more esoteric method to name your nodes that has nothing to do with your host information, you can still read the client.rb located on your node, which is how your node knows what to identify itself as to the Chef server. On Windows it's located at C:/chef/client.rb; on UNIX it's at /etc/chef/client.rb. I'll leave the parsing of the file up to you.
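For instance, here is a minimal plain-Ruby sketch of pulling node_name out of a client.rb; the file contents below are invented for illustration, so treat the regex as a starting point rather than a complete parser:

```ruby
# Hypothetical client.rb contents, stubbed in as a string for the example;
# in practice you'd use File.read('/etc/chef/client.rb') or the Windows path.
client_rb = <<~CONF
  log_level        :info
  chef_server_url  "https://chef.example.com/organizations/myorg"
  node_name        "app-server-03"
CONF

# Capture the quoted value after the node_name setting.
node_name = client_rb[/^\s*node_name\s+["']([^"']+)["']/, 1]
puts node_name
```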
To view the full extent of what Ohai gathers (everything available under node), log onto a bootstrapped machine and type ohai into your shell. It's quite a lot, so you might want to redirect the output to a text file and use an editor to scroll/search through it.
EDIT1:
In Test Kitchen the location changes to <your kitchen cache location>\client.rb. For example, if you use Vagrant with Windows and its defaults, it becomes c:\users\vagrant\appdata\local\temp\kitchen\client.rb
EDIT2:
To bring it back to your original example: if the contents of node['server'] can be either PT1 or PT2, then you can do the following
setup(node['server'])
and you can control the contents of node['server'] through any variety of mechanisms. If you are controlling it through the hostname, then you could do
attributes/default.rb
...
default['server'] = node['hostname']
or, more simply, if your standards allow for it
recipes/default.rb
...
setup(node['hostname'])
Although normally you'd control what is being set up through separate recipes defined in your run list.
You can make this totally dynamic even:
node['whatever'][node.name].each do |key, value|
  setup(key) if value == true
end
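Outside of a Chef run, the same dispatch can be sketched in plain Ruby; the node hash and hostname below are made-up stand-ins for what a real recipe would read from node attributes:

```ruby
# Stand-in for a Chef setup helper: just reports which feature it configures.
def setup(key)
  puts "setting up #{key}"
end

# Stand-in for the node attribute tree (default['server1'][...] etc.).
node = {
  'server1' => { 'PT1' => true,  'PT2' => false },
  'server2' => { 'PT1' => false, 'PT2' => true  },
}
hostname = 'server2' # in a recipe this would come from node['hostname']

# Dynamically dispatch based on whichever flags are true for this host.
node[hostname].each do |key, value|
  setup(key) if value == true
end
```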
Related
We have an issue where we don't have admin privileges to tag servers simply with knife. How would I tell Chef to read a template and, if the template includes the node.name of the server, tag it?
I know I can tag servers with tag('tagnamehere'), but I'm not sure about the code surrounding that, or whether "Template.readlines" works as a search function instead of "File.readlines".
if Template.readlines('template1.erb').grep(/#{node.name}/).any?
  tag('mytag')
end
Not sure how to accomplish this feat. But trying very hard to understand as an Ops person.
If I understand correctly, you are reading the static template .erb file and checking whether the #{node.name} variable is used in it or not.
In that case the solution is to skip string interpolation by escaping the # with \ in the grep regex:
if File.readlines('template1.erb').grep(/\#{node.name}/).any?
  tag('mytag')
end
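A quick plain-Ruby demonstration of why the backslash matters, using a made-up OpenStruct node and in-memory lines instead of a real template file: without the escape, Ruby interpolates node.name into the regex; with it, the regex matches the literal text "#{node.name}" as it appears in an unrendered .erb file.

```ruby
require 'ostruct'

# Hypothetical node object and raw template lines, for illustration only.
node = OpenStruct.new(name: 'web01')
lines = ['hostname: <%= "#{node.name}" %>', 'port: 22']

interpolated = lines.grep(/#{node.name}/)  # searches for "web01"
literal      = lines.grep(/\#{node.name}/) # searches for the literal "#{node.name}"

puts interpolated.any?  # the raw template never contains the rendered name
puts literal.any?       # but it does contain the literal interpolation syntax
```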
I'm running an SSH daemon in a container with Docker. As Docker is managed by systemd and sshd logs to stdout, the relevant data for detecting attackers appears in systemd's journal, but its entries have an extra prefix like this:
Feb 13 21:51:25 my.example.com dockerd[427]: Feb 13 18:51:25 sshd[555]: Invalid user ts3bot from 180.166.17.122 port 43474
The jail is configured with this snippet:
[sshd]
enabled = true
mode = aggressive
filter = sshd[mode=%(mode)s]
port = ssh
It seems that this line from filters.d/sshd.conf contains what I want to change:
journalmatch = _SYSTEMD_UNIT=sshd.service + _COMM=sshd
But I can't find any helpful documentation on journalmatch's configuration. I'm using fail2ban 0.10.
Can someone explain how the part on the right of the equals sign is to be interpreted?
When I hopefully figure out how to adjust that value, should I edit filters.d/sshd.conf directly (it's provided by an Arch package) or somewhere else?
To preserve the option of having a separate sshd jail for the host system itself, here's what I would do:
Version – Use a fail2ban version >= 0.9, which supports systemd as a backend. (BTW: version 0.11 is pretty new and might not be stable yet, but I like the new feature that automatically increases ban times for each new match from the same IP.)
Jail – Create a separate jail in jail.d/sshd-docker. Adopt settings from the original sshd jail as needed. Maybe start with low ban times for safety and increase them later. Add backend = systemd to the new sshd-docker jail. It could look like this:
[sshd-docker]
enabled = true
filter = sshd-docker
action = iptables
backend = systemd
maxretry = 5
findtime = 1d
bantime = 2w
Filter – I prefer to leave filter files and the original jail.conf untouched so I can easily upgrade to newer fail2ban versions. Therefore I would suggest duplicating the filter file filter.d/sshd.conf to filter.d/sshd-docker.conf and referring to that new filter in your sshd-docker jail (as seen above).
Filter/regex – Adapt the regex in filter.d/sshd-docker.conf to match your log entries. It could be as simple as changing this
_daemon = sshd
to
_daemon = docker
as the _daemon directive is used to construct the __prefix_line regex as you can see in filter.d/common.conf.
Filter/journalmatch – As far as I can see from the fail2ban-regex man page, the journalmatch directive overrides other filters. Therefore you might also need to change this line in your filter.d/sshd-docker.conf
journalmatch = _SYSTEMD_UNIT=sshd.service + _COMM=sshd
to
journalmatch =
(In fail2ban 0.11 you could also just remove this line. I'm not sure when prior versions stopped requiring a journalmatch = entry in a filter file.)
Test – Reload fail2ban and check how it works.
I confirm that creating a new filter with journalmatch = and _daemon = docker/container_name (that's how I'm logging in syslog) fixed the issue for me.
In theory a new filter should not be needed, as everything in the filter can be overwritten in the jail according to the jail.conf man page. However, for me it did not work: the _daemon overwrite was not taken into account in jail.local.
The only way to get it working for me was to copy sshd.conf to docker_sshd.conf and update the _daemon and journalmatch fields.
While implementing a templated config file with Chef 11.x, I'd like to insert the current date/time into the file whenever it is updated.
For example:
# Created By : core::time-settings
# On : <%= Time.now %>
Obviously this evaluates on each recipe run and constantly updates the target file even when the other attributes are OK, which is not desired.
Therefore, is anyone aware of a solution? I'm not aware of any built-in logic within Chef to achieve this, nor of a built-in Chef variable that I could evaluate within a ruby block that would only be true if the other attributes are out of compliance (as that would provide a potential workaround).
I know that I could run an execute resource that only runs after the template resource has fired and expands a variable in the file, but I don't like the idea of doing that.
Thanks,
While I agree with Tensibai that what you expect is not what Chef is made for, what I want to add (because some time ago I searched pretty long for this) is how to include the current time stamp in a file only once it was modified through Chef (somehow you have to circumvent Chef always updating the time stamp).
The result can be found here; this is a simplified, untested version:
time = Time.new.strftime("%Y%m%d%H%M%S")

template "/tmp/example.txt" do
  source "/tmp/example.txt.erb" # the source is not in a cookbook, but on the disk of the node!
  local true
  variables(
    :time => time
  )
  action :nothing
end

template "/tmp/example.txt.erb" do
  variables(
    :variable1 => "test"
  )
  notifies :create, resources(:template => "/tmp/example.txt"), :immediately
end
Every time the content of /tmp/example.txt.erb changes, it triggers /tmp/example.txt to be written, taking /tmp/example.txt.erb as the template from the local disk instead of from the cookbook (because of local true) and replacing the time variable with the current time.
So the only variable that has to be replaced when writing /tmp/example.txt is the time; thus the example.txt.erb template looks like this:
# my template
time: <%%= @time %>
other stuff here.. <%= @variable1 %>
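If it helps, the double-render mechanics can be tried outside Chef with Ruby's stdlib ERB class. This is just a sketch of the escaping behavior (a literal <%%= survives the first pass as <%=), not the resource-based version with notifications:

```ruby
require 'erb'

# First pass: <%%= ... %> escapes to a literal <%= ... %>, while normal tags render.
first_pass = ERB.new("time: <%%= @time %>\nvalue: <%= @variable1 %>")
@variable1 = 'test'
intermediate = first_pass.result(binding) # the @time tag is still unrendered here

# Second pass: only now is the time stamp filled in.
@time = Time.now.strftime('%Y%m%d%H%M%S')
final = ERB.new(intermediate).result(binding)
puts final
```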
That's the way Chef works: it diffs the rendered template against the actual file, and since the timestamp is not the same, it replaces the file.
Your alternative solution won't work either, for the same reason: a placeholder will always differ from the datetime that replaced it.
The best you can do is write a separate file alongside, named 'myfile-last-update' for example, with text inside describing the last update.
But one last question: why would you want the time inside the file when it's already present in the file's attributes (ls -l should give you this information)?
I created a Chef cookbook with attributes, then tried to bootstrap a node and pass additional attributes to supplement and/or override the defaults.
Is it possible to print the attribute tree to see which attributes are loaded and which are overridden?
To get the entire attribute tree in a useful form from inside a converged Chef run (as opposed to via knife from the Chef Server, which is useless in a solo environment), look at node.to_hash. More information is in "Chef::Node".
To get a pretty printed log you can use Chef's JSON library's pretty printer:
output="#{Chef::JSONCompat.to_json_pretty(node.to_hash)}"
log output
or write a file local to your client:
output = "#{Chef::JSONCompat.to_json_pretty(node.to_hash)}"
file '/tmp/node.json' do
  content output
end
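For what it's worth, the same pretty-printing can be reproduced outside Chef with the stdlib JSON library; the hash below is just a stand-in for node.to_hash:

```ruby
require 'json'

# Hypothetical miniature of what node.to_hash returns on a converged node.
node_hash = {
  'chef_type' => 'node',
  'name'      => 'node.example.com',
  'run_list'  => ['recipe[example]'],
}

# JSON.pretty_generate plays the role of Chef::JSONCompat.to_json_pretty here.
output = JSON.pretty_generate(node_hash)
puts output
```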
Note that this is the converged node, so you won't get the default/override/etc. levels you can get with node.debug_value, but if you don't actually know the name/path of the attribute, or you need to loop over a number of attributes, this could be your friend.
You'll get a huge result that looks like this highly trimmed example:
{
"chef_type": "node",
"name": "node.example.com",
"chef_environment": "_default",
"build-essential": {
"compile_time": false
},
"homebrew": {
"owner": null,
"auto-update": true,
...
},
"recipe": [
"example"
],
"run_list": [
"recipe[example]"
]
}
"How do you create pretty json in CHEF (ruby)" had the pretty printer pointer.
You can use node.debug_value to show a single attribute. This will print the value of that attribute at each precedence level. However, doing this for every attribute at every level is harder (I'm not sure of a way to do it). Furthermore, because of the massive volume of attributes from Ohai, I'm not sure you'd even want to.
If your Chef run is completing correctly, you can do knife node show -l <nodename> (that's a lowercase L). That will show you the actual values, but it produces a huge volume of data and doesn't tell you which values are default, normal, override, etc.
Forking the answer by @keen, this produces more human-readable output in YAML format.
output = node.to_yaml
file '/var/node.yaml' do
  content output
end
At times it might be easier to read the variables off a node after it has been provisioned:
knife node edit <hostname-here> -a
I have a source file that contains two fields: IP_ADDRESS and USER_NAME. I want to check whether the IP address is valid before loading it into the data warehouse using DataStage. How do I do this?
I was browsing Stack Overflow and think I might have a solution to your question. Create a job to grab all of the IP_ADDRESS values from the file and send them to a BASIC transformer (search for the BASIC transformer in DataStage; it is NOT the one that is normally on the palette). In the transformer, set the stage variable to SetUserStatus() and write the column out to a Peek stage (you don't need the output at all; the SetUserStatus call is the important part). This allows you to pass the command output (the list of IP addresses) up to a sequence.
In the sequence, start with the job you just created (the BASIC transformer job) and link it to a User Variables Activity stage. There, set the name to something like 'IP Address' and the expression to IP_ADDRESS.$UserStatus. You can then use a loop to take that output, which is now a list, and send each individual IP address to an Execute stage with a ping command to see whether it is a valid, reachable address.
If an address is valid, have the job that writes USER_NAME and IP_ADDRESS do a select where IP_ADDRESS equals the validated address. Send the ones that aren't valid down a different path and write them out to a .txt file somewhere so you know which ones failed. I'm sure you will need a few more steps in there, but that should be the gist of it.
Hope my quick stab at your issue helps.
Yes, you can use a transformer, or a transformer and a filter, depending on the version of DataStage you're using. If you're using PX, just encode the validation logic in a transformer stage and then, on the output link, set up a filter that doesn't allow rows to pass forward if they didn't satisfy the validation logic.