I am trying to create a custom fact I can use as the value for a class parameter in a Hiera YAML file.
I am using the openstack/puppet-keystone module and I want to use fernet-keys.
According to the comments in the module I can use this parameter.
# [*fernet_keys*]
#   (Optional) Hash of Keystone fernet keys
#   If you enable this parameter, make sure enable_fernet_setup is set to True.
#   Example of valid value:
#   fernet_keys:
#     /etc/keystone/fernet-keys/0:
#       content: c_aJfy6At9y-toNS9SF1NQMTSkSzQ-OBYeYulTqKsWU=
#     /etc/keystone/fernet-keys/1:
#       content: zx0hNG7CStxFz5KXZRsf7sE4lju0dLYvXdGDIKGcd7k=
#   Puppet will create a file per key in $fernet_key_repository.
#   Note: defaults to false so keystone-manage fernet_setup will be executed.
#   Otherwise Puppet will manage keys with File resource.
#   Defaults to false
So I wrote this custom fact:
[root@puppetmaster modules]# cat keystone_fernet/lib/facter/fernet_keys.rb
Facter.add(:fernet_keys) do
  setcode do
    fernet_keys = {}
    puts('Debug keyrepo is /etc/keystone/fernet-keys')
    Dir.glob('/etc/keystone/fernet-keys/*').each do |fernet_file|
      data = File.read(fernet_file)
      if data
        puts("Debug Key file #{fernet_file} contains #{data}")
        fernet_keys[fernet_file] = { 'content' => data }
      end
    end
    fernet_keys
  end
end
Then in my keystone.yaml file I have this line:
keystone::fernet_keys: '%{::fernet_keys}'
But when I run puppet agent -t on my node I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, "{\"/etc/keystone/fernet-keys/1\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}, \"/etc/keystone/fernet-keys/0\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}}" is not a Hash. It looks to be a String at /etc/puppetlabs/code/environments/production/modules/keystone/manifests/init.pp:1144:7 on node mgmt-01
I had assumed that I had formatted the hash correctly because facter -p fernet_keys output this on the agent:
{
  /etc/keystone/fernet-keys/1 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  },
  /etc/keystone/fernet-keys/0 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  }
}
The code in the keystone module looks like this (with the module's own line numbers, since the error points at line 1144):
1142
1143   if $fernet_keys {
1144     validate_hash($fernet_keys)
1145     create_resources('file', $fernet_keys, {
1146       'owner'     => $keystone_user,
1147       'group'     => $keystone_group,
1148       'subscribe' => 'Anchor[keystone::install::end]',
1149     }
1150     )
1151   } else {
Puppet does not necessarily think your fact value is a string; it might, if the agent is configured to stringify facts, but that's actually beside the point. The bottom line is that Hiera interpolation tokens don't work the way you think. Specifically:
Hiera can interpolate values of any of Puppet’s data types, but the
value will be converted to a string.
(Emphasis added.)
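One way around the stringification is to bypass Hiera for this particular parameter and bind it from a manifest instead, where no string conversion takes place. A minimal sketch, assuming a wrapper profile class (the profile::keystone name is made up for illustration):

# Hypothetical wrapper class; a resource-like declaration passes the
# structured fact through without Hiera's string conversion.
class profile::keystone {
  class { 'keystone':
    fernet_keys         => $facts['fernet_keys'],
    enable_fernet_setup => true, # the module's docs require this when fernet_keys is set
  }
}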
I have a YAML file (also used in an Azure DevOps pipeline, so it needs to stay in this format) which contains some settings I'd like to access directly from my Terraform module.
The file looks something like:
variables:
  - name: tenantsList
    value: tenanta,tenantb
  - name: unitName
    value: canary
I'd like to have a module like this to access the settings but I can't see how to get to the bottom level:
locals {
  settings = yamldecode(file("../settings.yml"))
}

module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
But the terraform plan errors with this:
Error: Unsupported attribute

  on canary.tf line 16, in module "infra":
  16:   unitname = local.settings.variables.unitName
    |----------------
    | local.settings.variables is tuple with 2 elements

This value does not have any attributes.
It seems like the main reason this is difficult is that this YAML file represents what is logically a single map as a YAML list of maps.
When reading data from a separate file like this, I like to write an explicit expression to normalize it and optionally transform it for more convenient use in the rest of the Terraform module. In this case, it seems like having variables as a map would be the most useful representation as a Terraform value, so we can write a transformation expression like this:
locals {
  raw_settings = yamldecode(file("${path.module}/../settings.yml"))
  settings = {
    variables = tomap({
      for v in local.raw_settings.variables : v.name => v.value
    })
  }
}
The above uses a for expression to project the list of maps into a single map using the name values as the keys.
With the list of maps converted to a single map, you can then access it the way you originally tried:
module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
If you were to output the transformed value of local.settings as YAML, it would look something like this, which is why accessing the map elements directly is now possible:
variables:
  tenantsList: tenanta,tenantb
  unitName: canary
This will work only if all of the name strings in your input are unique, because otherwise there would not be a unique map key for each element.
(Writing a normalization expression like this also doubles as some implicit validation for the shape of that YAML file: if variables were not a list or if the values were not all of the same type then Terraform would raise a type error evaluating that expression. Even if no transformation is required, I like to write out this sort of expression anyway because it serves as some documentation for what shape the YAML file is expected to have, rather than having to study all of the references to it throughout the rest of the configuration.)
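As an optional extra (nothing in the question requires it), the same normalization pass could also split the comma-separated tenantsList into a real list, since it too is logically structured data flattened into a string:

locals {
  raw_settings = yamldecode(file("${path.module}/../settings.yml"))

  settings = {
    variables = tomap({
      for v in local.raw_settings.variables : v.name => v.value
    })
  }

  # Hypothetical convenience value: yields ["tenanta", "tenantb"].
  tenants = split(",", local.settings.variables.tenantsList)
}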
With my multidecoder for YAML and JSON you can access multiple YAML and/or JSON files via their relative paths in one step.
Documentation can be found here:
Terraform Registry -
https://registry.terraform.io/modules/levmel/yaml_json/multidecoder/latest?tab=inputs
GitHub:
https://github.com/levmel/terraform-multidecoder-yaml_json
Usage
Place this module in the location where you need to access multiple different YAML and/or JSON files (different paths possible) and pass your path(s) in the filepaths parameter, which takes a set of strings of relative paths to YAML and/or JSON files. You can change the module name if you want!
module "yaml_json_decoder" {
  source    = "levmel/yaml_json/multidecoder"
  version   = "0.2.1"
  filepaths = ["routes/nsg_rules.yml", "failover/cosmosdb.json", "network/private_endpoints/*.yaml", "network/private_links/config_file.yml", "network/private_endpoints/*.yml", "pipeline/config/*.json"]
}
Patterns to access YAML and/or JSON files from relative paths:
To access all YAML and/or JSON files in a folder, enter your path as follows: "folder/rest_of_folders/*.yaml", "folder/rest_of_folders/*.yml" or "folder/rest_of_folders/*.json".
To access a specific YAML or JSON file in a folder structure, use "folder/rest_of_folders/name_of_yaml.yaml", "folder/rest_of_folders/name_of_yaml.yml" or "folder/rest_of_folders/name_of_yaml.json".
To select all YAML and/or JSON files within a folder, use the "*.yml", "*.yaml" or "*.json" notation (see the Usage section above).
YAML delimiter support is available from version 0.1.0!
WARNING: Specify only the relative path. path.root should not be passed (it is included by the module by default), only everything after it.
Access YAML and JSON entries
Now you can access all entries within all of the YAML and/or JSON files you've selected like this: module.yaml_json_decoder.files.[name of your YAML or JSON file].entry. If the name of your YAML or JSON file is name_of_your_config_file, then access it as follows: module.yaml_json_decoder.files.name_of_your_config_file.entry.
Example of accessing multiple YAML and JSON files from different paths (directories)
first YAML file:
routes/nsg_rules.yml
rdp:
  name: rdp
  priority: 80
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 3399
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
---
ssh:
  name: ssh
  priority: 70
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 24
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
second YAML file:
services/logging/monitoring.yml
application_insights:
  application_type: other
  retention_in_days: 30
  daily_data_cap_in_gb: 20
  daily_data_cap_notifications_disabled: true
  logs:
    # Optional fields
    - "AppMetrics"
    - "AppAvailabilityResults"
    - "AppEvents"
    - "AppDependencies"
    - "AppBrowserTimings"
    - "AppExceptions"
    - "AppPerformanceCounters"
    - "AppRequests"
    - "AppSystemEvents"
    - "AppTraces"
first JSON file:
test/config/json_history.json
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}
main.tf
module "yaml_json_multidecoder" {
  source    = "levmel/yaml_json/multidecoder"
  version   = "0.2.1"
  filepaths = ["routes/nsg_rules.yml", "services/logging/monitoring.yml", "test/config/*.json"]
}
output "nsg_rules_entry" {
  value = module.yaml_json_multidecoder.files.nsg_rules.aks.ssh.source_address_prefix
}

output "application_insights_entry" {
  value = module.yaml_json_multidecoder.files.monitoring.application_insights.daily_data_cap_in_gb
}

output "json_history" {
  value = module.yaml_json_multidecoder.files.json_history.glossary.title
}
Changes to Outputs:
  nsg_rules_entry            = "VirtualNetwork"
  application_insights_entry = 20
  json_history               = "example glossary"
I am receiving JSON from an http Terraform data source:
data "http" "example" {
  url = "${var.cloudwatch_endpoint}/api/v0/components"

  # Optional request headers
  request_headers {
    "Accept"    = "application/json"
    "X-Api-Key" = "${var.api_key}"
  }
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
  program = ["ruby", "./fetchComponent.rb"]

  query = {
    data = "${data.http.example.body}"
  }
}
Here is the Ruby function:
#!/usr/bin/env ruby
require 'json'
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside the external data source, but I am unable to.
Ultimately, what I want to do is create a list of the componentID values located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
  ${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
  ${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform releases before 0.12 can't decode JSON directly, but there is a workaround for working with nested lists.
Your Ruby script needs to be adjusted to produce output shaped like the variable http below; then you should be fine to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{ componentID = "k8QEbeuHdDnU", name = "Jenkins" }]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU
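For what it's worth, the external data source protocol requires result to be a flat map of strings, so the Ruby script has to flatten the nested JSON itself rather than echo it back. A sketch of one way to do that, assuming the goal is just the list of componentID values:

#!/usr/bin/env ruby
require 'json'

# The external data source passes its `query` block as a JSON object on stdin.
input = JSON.parse(STDIN.read)

# input['data'] holds the raw JSON string from the http data source.
components = JSON.parse(input['data'])

# Result values must all be strings, so join the IDs into one value
# that Terraform can split() back apart.
ids = components.map { |c| c['componentID'] }.join(',')

STDOUT.write(JSON.generate('componentIDs' => ids))

On the Terraform side, ${split(",", data.external.component_ids.result["componentIDs"])} then yields the list of IDs. And if upgrading is an option, Terraform 0.12 added jsondecode(), which removes the need for the external data source entirely:

locals {
  # Decode the raw JSON body into a list of objects (Terraform 0.12+).
  components    = jsondecode(data.http.example.body)
  component_ids = [for c in local.components : c.componentID]
}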
I am testing adding a watermark to a video once it is uploaded. I am running into an issue where Lambda wants me to specify which file to change on upload, but I want it to trigger when any file (really, any file that ends in .mov, .mp4, etc.) is uploaded.
To clarify, this all works when manually creating a pipeline and job.
Here's my code:
require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  resp = client.create_job({
    pipeline_id: "15521341241243938210-qevnz1", # required
    input: {
      key: File, # this is where my issue is
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      # thumbnail_pattern: "ThumbnailPattern",
      # thumbnail_encryption: {
      #   mode: "EncryptionMode",
      #   key: "Base64EncodedString",
      #   key_md_5: "Base64EncodedString",
      #   initialization_vector: "ZeroTo255String",
      # },
      # rotate: "Rotate",
      preset_id: "1351620000001-000001",
      # segment_duration: "FloatString",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
          # encryption: {
          #   mode: "EncryptionMode",
          #   key: "zk89kg4qpFgypV2fr9rH61Ng",
          #   key_md_5: "Base64EncodedString",
          #   initialization_vector: "ZeroTo255String",
          # },
        },
      ],
    }
  })
end
How do I specify just any file that is uploaded, or files of a specific format, for the input key?
Now, my issue is that I am using Active Storage, so the key doesn't end in .jpg or .mov, etc.; it is just a randomly generated string (they have reasons for doing this). I am trying to find a reason to use Active Storage, and this is my final step to making it work like the alternatives before it.
The suffix (extension) field on the S3 event trigger is optional. If you don't specify anything in it, the Lambda will be triggered no matter what file is uploaded. You can then check whether it's the type of file you want and proceed, as sketched below.
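A minimal sketch of that check, assuming a standard S3 put-event payload (the event structure comes from S3 notifications; the pipeline, preset, and watermark values are copied from the question):

require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  # S3 event notifications carry the uploaded object's key here.
  key = event['Records'][0]['s3']['object']['key']

  # Only proceed for the formats we care about; exit quietly otherwise.
  return unless key.end_with?('.mov', '.mp4')

  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  client.create_job({
    pipeline_id: "15521341241243938210-qevnz1",
    input: {
      key: key, # transcode whichever file fired the event
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      preset_id: "1351620000001-000001",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
        },
      ],
    },
  })
end

With Active Storage keys that have no extension, the end_with? guard will never match; in that case you could drop the suffix check and decide based on the blob's content type instead.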
I seem to have run into a syntax error on a previously-working puppet manifest. This is running on a local vagrant box with Ubuntu 12.04, and Puppet version 3.4.2. The puppet stuff was all generated at puphpet.com.
The error I'm getting is:
Error: Could not parse for environment production: Syntax error at '|'
at /tmp/vagrant-puppet/manifests/default.pp:263:29 on node
vagrant.example.com
Line 263 of default.pp is the second line of this snippet:
if count($php_values['ini']) > 0 {
  $php_values['ini'].each { |$key, $value|
    puphpet::ini { $key:
      entry       => "CUSTOM/${key}",
      value       => $value,
      php_version => $php_values['version'],
      webserver   => $php_webserver_service
    }
  }
}
It looks like you haven't set parser to future.
Run this command:
puppet config print parser
If it returns current, you don't have access to the .each function. To change this, edit /etc/puppet/puppet.conf and put parser = future under the [main] block, as shown below. The above command should then return future.
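For reference, the change amounts to this stanza:

# /etc/puppet/puppet.conf
[main]
parser = future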
Reference: http://docs.puppetlabs.com/references/latest/function.html#each
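If switching to the future parser is not an option, the usual Puppet 3 workaround is to move the loop body into a defined type and declare it once per hash key. A sketch under that assumption (the puphpet::custom_ini name is made up, and stdlib's keys() function is assumed to be available, since the snippet already uses stdlib's count()):

# Hypothetical defined type standing in for the .each block;
# $name is each key of the ini hash.
define puphpet::custom_ini ($ini_hash, $php_version, $webserver) {
  puphpet::ini { $name:
    entry       => "CUSTOM/${name}",
    value       => $ini_hash[$name],
    php_version => $php_version,
    webserver   => $webserver,
  }
}

if count($php_values['ini']) > 0 {
  puphpet::custom_ini { keys($php_values['ini']):
    ini_hash    => $php_values['ini'],
    php_version => $php_values['version'],
    webserver   => $php_webserver_service,
  }
}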
I wrote a custom fact which returns a comma-separated list of addr:port pairs, like this:
sb_intl_conn => sbcms-t:22,sbsql05-wvuk-inst5:1434,sborc07-uk-t:1533,..,..,..
The number of elements in the string varies from node to node. I need to run a Nagios TCP port check on each of them. I think sb_intl_conn.split(",") will turn this string into an array, but then how can I iterate over it to do something like this?
@@nagios_service { "check_stat_${::fqdn}_${addr}_${port}":
  use                 => 'generic-service',
  check_command       => "remote-nrpe-tcp-check!${addr}!${port}",
  service_description => "V2::CON: ${addr} [Palms]",
  display_name        => "Connection check: ${addr}:${port}",
  servicegroups       => 'batch-worker',
  hostgroup_name      => 'batch-job',
}
Any help would be greatly appreciated. Cheers!!
Update 1:
I was trying to simulate iamauser's suggestion but haven't been able to get my head around it yet. This is what I did in my foo.pp:
class test::foo {
  define bar {
    $var1 = inline_template("<%= scope.lookupvar($name).split(':').first.to_s.chomp %>")
    $var2 = inline_template("<%= scope.lookupvar($name).split(':').last.to_s.chomp %>")
    notify {"${var1}_${var2}": }
  }
}
and then in my node.pp:
$ifs = ['abc.com:80','xyz.co.uk:1512']
test::foo::bar {$ifs:}
which throws this error on the node:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to parse inline template: Could not find value for 'abc' in 65 at /etc/puppet/services/test/manifests/foo.pp:4 on node jobserver-01.local.cloud.uk
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
I don't understand what I'm doing wrong. And why is it Could not find value for 'abc' rather than abc.com? Any idea? Cheers!!
Update 2:
I ended up using Hiera and decided to give the original "array of hashes" idea a try, but I'm having some problems implementing it.
This is what I have in Hiera:
hiera -d -c /etc/puppet/hiera.yaml nag_chk m_env=talend s_env=local
[ ... ]
DEBUG: Thu Mar 21 12:28:02 +0000 2013: Got answer for key nagi_chk, final answer
DEBUG: Thu Mar 21 12:28:02 +0000 2013: Answer after outer loop = archimedes-db-02.svc.ft.com:1521 ftftp01-uvln-uk-p:22 www.google.com:80 ftaps01-lvpr-uk-local:8080
archimedes-db-02:1521 ftftp01-uvln-uk-p:22 google.com:80
Then, in my foo.pp:
class test::foo {
  define bar2 () {
    $var1 = $name['addr']
    $var2 = $name['port']
    notify {"*** ${var1}_${var2} *********": }
  }
}
and my node.pp:
$array_chk = hiera('nag_chk')
$urls = inline_template("<%= '$array_chk'.split(' ').map{|kv| v1,v2 = kv.split(':'); {'addr' => v1, 'port' => v2}} -%>")
test::foo::bar2 {$urls:}
and as usual, I get an error:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: name is not an hash or array when accessing it with 0 at /etc/puppet/services/talend/talend/manifests/foo.pp:10 on node talend-jobserver-01.local.cloud.ft.com
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
What am I doing wrong? As far as I can see, the "array of hashes" is in the right format in the irb console:
irb(main):001:0> STRING = "archimedes-db-02:1521 ftftp01-uvln-uk-p:22 google.com:80"
=> "archimedes-db-02:1521 ftftp01-uvln-uk-p:22 google.com:80"
irb(main):002:0> STRING.split(' ').map{|kv| v1,v2 = kv.split(':'); {'addr' => v1, 'port' => v2}}
=> [{"addr"=>"archimedes-db-02", "port"=>"1521"}, {"addr"=>"ftftp01-uvln-uk-p", "port"=>"22"}, {"addr"=>"google.com", "port"=>"80"}]
Any further thoughts? Cheers!!
This example may help solve your particular case.
$foo = [{"addr" => "bar", "port" => "1"},
        {"addr" => "bat", "port" => "2"}]

testmod::bar {$foo:}

define testmod::bar () {
  $var1 = $name["addr"]
  $var2 = $name["port"]
  notify {"${var1}_${var2}": }
}
Put the Nagios type inside the defined type; a sketch follows below. You may have to change the CSV to a hash.
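A sketch of that wiring, reusing the inline_template trick and the Nagios parameters from the question (testmod::bar is the same stand-in name used above):

define testmod::bar () {
  $addr = inline_template("<%= '$name'.split(':').first.to_s.chomp %>")
  $port = inline_template("<%= '$name'.split(':').last.to_s.chomp %>")

  @@nagios_service { "check_stat_${::fqdn}_${addr}_${port}":
    use                 => 'generic-service',
    check_command       => "remote-nrpe-tcp-check!${addr}!${port}",
    service_description => "V2::CON: ${addr} [Palms]",
    display_name        => "Connection check: ${addr}:${port}",
    servicegroups       => 'batch-worker',
    hostgroup_name      => 'batch-job',
  }
}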
UPDATE (added after MacUsers' update): The following works for me:
$foo = ["abc.com:80","xyz.co.uk:1512"]

testmod::bar {$foo:}

define testmod::bar () {
  $var1 = inline_template("<%= '$name'.split(':').first.to_s.chomp %>")
  $var2 = inline_template("<%= '$name'.split(':').last.to_s.chomp %>")
  notify {"${var1}_${var2}": }
}
Running puppet agent gives me this:
Notice: /Stage[main]/Testmodule/Testmodule::Testmod::Bar[abc.com:80]/Notify[abc.com_80]/message: defined 'message' as 'abc.com_80'
Notice: xyz.co.uk_1512
Notice: /Stage[main]/Testmodule/Testmodule::Testmod::Bar[xyz.co.uk:1512]/Notify[xyz.co.uk_1512]/message: defined 'message' as 'xyz.co.uk_1512'