How do I parse the following YAML from cassandra.yaml in a Ruby (InSpec) profile to get the seeds value? I would like to get all three IP addresses in one string, or as three separate strings.
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.0.0.1, 10.0.0.2, 10.0.0.3"
Maybe there are better ways, but this would work:
require 'yaml'

config = YAML.load_file("/path/cassandra.yaml")["seed_provider"].first
config.dig("parameters").first["seeds"]
# => "10.0.0.1, 10.0.0.2, 10.0.0.3"
You could try the file resource or yaml resource in InSpec.
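For example, if you want the three addresses as separate strings, splitting the comma-delimited value is enough; and with the InSpec yaml resource the same check could look roughly like this (the file path is assumed, and integer indexes into arrays depend on your InSpec version):

seeds = config.dig("parameters").first["seeds"]
seeds.split(",").map(&:strip)
# => ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

describe yaml('/etc/cassandra/cassandra.yaml') do
  its(['seed_provider', 0, 'parameters', 0, 'seeds']) { should cmp "10.0.0.1, 10.0.0.2, 10.0.0.3" }
end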
When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
I use the puppetlabs/accounts module.
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want the stored user data to be applied to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})

$user_data.each |$user, $props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
the lookup function looks up key 'accounts::user' in your Hiera data
performing a hash merge of results appearing at different levels of the hierarchy
expecting the result to be a hash with string keys and hash values
and defaulting to an empty hash if no results are found;
the mappings in the result hash are iterated, and for each one, an instance of the accounts::user defined type is declared
using the (outer) hash key as the user name,
and the value associated with that key as a mapping from parameter names to parameter values.
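For concreteness, a minimal sketch of the simpler option, with that lookup dropped straight into the matching node block (the node name here is made up):

node 'db01.example.com' {
  $user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})

  $user_data.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}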
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults:  ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
In addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users:  ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)
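Whichever variant of the test class you pick, remember John's point above: it still has to be declared into the node's catalog before Automatic Parameter Lookup will bind $users from the test::users key. A minimal sketch (the node name is made up):

node 'db01.example.com' {
  include test
}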
I am looking for a way to dynamically set the key using the path of the file below.
For example, if I have this YAML:
prospectors.config:
  - fields:
      queue_name: <somehow get the globbed string below in here>
    paths:
      - /var/log/casino/*.log
    type: log

output.redis:
  hosts:
    - "producer:6379"
  key: "%{[fields.queue_name]}"
So if I had a file called /var/log/casino/test.log, the key would become test.
I'm not sure that what you want is possible.
You could use the source field and configure your Redis output using that as the key:
output.redis:
  hosts:
    - "producer:6379"
  key: "%{source}"
This has the disadvantage that the key would be the absolute path of the source file, not the basename your question asks for.
Suppose, though, that you have a small number of possible basename patterns and want a queue for each. For example, you have files:
/common/path/test-1.log
/common/path/foo-0.log
/common/path/01-bar.log
/common/path/test-3.log
...
and you want three Redis queues, test, foo and bar. You could use the source field and the conditionals available in the keys configuration of the Redis output, something like this:
output.redis:
  hosts:
    - "producer:6379"
  key: "default_key"
  keys:
    - key: "test_key"
      when.contains:
        source: "test"
    - key: "foo_key"
      when.contains:
        source: "foo"
    - key: "bar_key"
      when.contains:
        source: "bar"
I have an application where I process a lot of IP addresses (analysing Checkpoint firewall rule sets). At one point I want to check if a particular address object is a /32 or a 'network'.
Currently I am doing it like this:
next unless ip.inspect.match(/\/255\.255\.255\.255/)
It works, but it seems a bit inefficient, and I can't see any method that extracts the mask from the address object.
Some parts of the Ruby core library are just sketched in, and IPAddr appears to be one of those that is, unfortunately, a little bit incomplete.
Not to worry. You can fix this with a simple monkey-patch:
require 'ipaddr'

class IPAddr
  def cidr_mask
    # For a contiguous netmask, (1 << width) - @mask_addr equals 2**host_bits,
    # so its log2 is the number of host bits to subtract from the address width.
    case @family
    when Socket::AF_INET
      32 - Math.log2((1 << 32) - @mask_addr).to_i
    when Socket::AF_INET6
      128 - Math.log2((1 << 128) - @mask_addr).to_i
    else
      raise AddressFamilyError, "unsupported address family"
    end
  end
end
That should handle IPv4 and IPv6 addresses:
IPAddr.new('151.101.65.69').cidr_mask
# => 32
IPAddr.new('151.101.65.69/26').cidr_mask
# => 26
IPAddr.new('151.101.65.69/255.255.255.0').cidr_mask
# => 24
IPAddr.new('2607:f8b0:4006:800::200e').cidr_mask
# => 128
IPAddr.new('2607:f8b0:4006:800::200e/100').cidr_mask
# => 100
It's not necessarily the best solution here, but it works.
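With that patch in place, the regexp test from the question could become a plain integer comparison, for example:

next unless ip.cidr_mask == 32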
I'm aware that this is a 3-year-old question, but it was the first result on Google for me when I searched, so I want to provide a new answer.
I was playing around in the console today and noticed that the prefix method on the IPAddr object returns the CIDR mask as an integer.
So, for example:
ip = IPAddr.new("192.168.1.0/24")
ip.prefix
# => 24
It also turns out that type coercion gives you the integer representation of the address and mask, so you could potentially do the math on the output of to_i, to_json, as_json or instance_values.
An example with the network address:
ip.to_i
# => 3232235776
ip.to_i.to_s(2)
# => "11000000101010000000000100000000"
And one with the netmask:
ip.as_json
# => {"family"=>2, "addr"=>3232235776, "mask_addr"=>4294967040}
ip.as_json["mask_addr"].to_s(2)
# => "11111111111111111111111100000000"
ip.as_json["mask_addr"].to_s(2).count("1")
# => 24
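Tying this back to the original question: on Ruby 2.5 or newer, where IPAddr#prefix is available, the /32 check can be done without a regexp, for example:

require 'ipaddr'

ip = IPAddr.new('151.101.65.69/255.255.255.255')
ip.prefix == 32
# => true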
I have defined a mapping in yaml that looks like:
default: &DEFAULT
  bucket: &bucket default_path
  # Make sure that the second parameter of join doesn't start with a /
  # otherwise it is interpreted as an absolute path and join won't work
  path1: !!python/object/apply:os.path.join [*bucket, work_area/test1]
  path2: !!python/object/apply:os.path.join [*bucket, work_area/test2]
I need to define more keys where the only value to be overwritten is bucket, something like:
production:
  <<: *DEFAULT
  bucket: "s3://production-bucket"
but I still get
conf['production']['path1'] => 'default_path/work_area/test1'
instead of
conf['production']['path1'] => 's3://production-bucket/work_area/test1'.
Is there any way to do this in YAML?
As is obvious from the syntax, I use PyYAML to parse the file.
YAML interpreters should take the most recent definition of an anchor:
An alias node is denoted by the “*” indicator. The alias refers to the most recent preceding node having the same anchor. It is an error for an alias node to use an anchor that does not previously occur in the document. It is not an error to specify an anchor that is not used by any alias node.
So even if PyYAML (3.10/3.11) did not throw a ComposerError when you try to parse:
default: &DEFAULT
  bucket: &bucket default_path
  # Make sure that the second parameter of join doesn't start with a /
  # otherwise it is interpreted as an absolute path and join won't work
  path1: !!python/object/apply:os.path.join [*bucket, work_area/test1]
  path2: !!python/object/apply:os.path.join [*bucket, work_area/test2]

production:
  <<: *DEFAULT
  bucket: &bucket "s3://production-bucket"
inserting the path1 and path2 keys with <<: *DEFAULT would still give you their expanded versions with default_path, as that is the definition available to the parser when it reads [*bucket, work_area/test1].
The "expansion" of the alias is done as soon as the alias is read in from the YAML source, not at some point at the end of the file, when all anchored data has been read in.
In your updated example, there is no other anchor bucket defined than the one for the scalar "default_path". You are confusing yourself by using the same name for the anchor and the keys (bucket), but the key names are completely irrelevant for resolving the alias *bucket.
If you can rearrange your YAML you might get something acceptable to your use case by doing ¹:
import ruamel.yaml
yaml_str = """\
default: &DEFAULT
  bucket: &klm default_path
production:
  &klm "s3://production-bucket"
result:
  <<: *DEFAULT
  # Make sure that the second parameter of join doesn't start with a /
  # otherwise it is interpreted as an absolute path and join won't work
  path1: !!python/object/apply:os.path.join [*klm, work_area/test1]
  path2: !!python/object/apply:os.path.join [*klm, work_area/test2]
"""
conf = ruamel.yaml.load(yaml_str)
print(conf['result']['path1'])
which will give you:
s3://production-bucket/work_area/test1
¹ This was done using ruamel.yaml of which I am the author.
I'm trying to introduce a many-to-one kind of mapping inside a YAML configuration file for rake.
That is, I have something like:
- server: address
and I'd like to have something like:
- server: {1, 3, 5: address1; 2, 8, 12: address2}
Of course, this is not the correct syntax.
This is because I need a different address depending on a given ID.
CONFIG['server'][3] # this should return 'address1'
CONFIG['server'][5] # this should return 'address1' too
CONFIG['server'][12] # and this should return 'address2'
Is this feasible in some way?
Thank you in advance.
It should work this way:
create a file in config called server_config.yml:
common: &common
  common_stuff_foo: foo
  common_stuff_bar: bar
server:
  1:
    <<: *common
    address: address_for_server1
  2:
    <<: *common
    address: address_for_server2
  # ... some other servers
  12:
    <<: *common
    address: address_for_server12
Put a file in config/initializers, e.g. config_servers.rb, with the content:
CONFIG = YAML.load_file("#{RAILS_ROOT}/config/server_config.yml")
and you might get your address via
CONFIG['server'][1]['address'] in your application
It's not tested, but I think it will work. I'm just a little bit uncertain about those numbers in the YAML file.
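If you would rather keep the many-to-one shape in the YAML itself, a minimal alternative sketch (not from the answer above; the file layout and keys are made up) is to list the IDs per address and invert the mapping when loading:

# config/server_config.yml (hypothetical layout):
#   server:
#     address1: [1, 3, 5]
#     address2: [2, 8, 12]
require 'yaml'

raw = YAML.load_file("#{RAILS_ROOT}/config/server_config.yml")
by_id = raw['server'].each_with_object({}) do |(address, ids), map|
  ids.each { |id| map[id] = address }
end
CONFIG = { 'server' => by_id }

CONFIG['server'][3]  # => "address1"
CONFIG['server'][12] # => "address2"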