Refactor Chef cookbooks - Ruby

I have a Chef recipe for package installs on Windows hosts. We used to use the Chef 13 client; however, we have upgraded to Chef 16 and are hitting some issues with the cookbook migration.
When I run the existing role as-is on a Windows Server 2016 / Chef 16 Test Kitchen platform, I get this error:
Chef::Exceptions::ImmutableAttributeModification
------------------------------------------------
Node attributes are read-only when you do not specify which precedence level to set. To set an attribute use code like `node.default["key"] = "value"'
and it points to this section of the recipe:
67: guid = guid.gsub(/{/, '').gsub(/}/, '') # remove {}
68: log "[#{role}][#{recipe_name}][#{app_name}]: GUID for installer #{app_name} is #{guid}"
69>> app_params.store('product_guid', guid) # add GUID to the list of parameters
70: end
Hence, my understanding is that Chef 16 doesn't like the Hash#store method.
I have tried some alternative ways to update the hash; however, they do not work.
Using the merge method:
app_params.merge!({ :product_guid => 'guid' })
Using a precedence level explicitly (I have used .default and .override as well; the symptom is the same):
node.force_override!['app_params']['product_guid'] = guid
(reference: https://github.com/chef/chef/issues/6563)
Note that when using the above methods I do not get any failures; however, the hash does not get updated at all.
I have added a few log statements to show this:
log "[#{role}][#{recipe_name}][#{app_name}]: app_params.keys is #{app_params.keys}"
log "[#{role}][#{recipe_name}][#{app_name}]: app_params is #{JSON.pretty_generate(app_params)}"
* log[[my-role][my-recipe][my-pkgName]: app_params.keys is ["app_name", "full_path", "action"]] action write
* log[[my-role][my-recipe][my-pkgName]: app_params is {
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install"
}] action write
## ^^ hash keys and hash values (app_params.keys and app_params) BEFORE the force_override method is used
log "[#{role}][#{recipe_name}][#{app_name}]: app_params.keys is #{app_params.keys}"
log "[#{role}][#{recipe_name}][#{app_name}]: app_params is #{JSON.pretty_generate(app_params)}"
* log[[my-role][my-recipe][my-pkgName]: app_params.keys is ["app_name", "full_path", "action"]] action write
* log[[my-role][my-recipe][my-pkgName]: app_params is {
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install"
}] action write
## ^^ hash keys and hash values (app_params.keys and app_params) AFTER the force_override method is used
Note that the hash and its keys have not been updated; it is still
["app_name", "full_path", "action"];
however we would expect
["app_name", "full_path", "action", "product_guid"]
and the hash itself to look like:
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install",
"product_guid" : XXXXYYY-0WRR-1234-ABCD-3ERDFR234GRT

Related

Using terraform yamldecode to access multi level element

I have a YAML file (also used in an Azure DevOps pipeline, so it needs to be in this format) which contains some settings I'd like to access directly from my Terraform module.
The file looks something like:
variables:
  - name: tenantsList
    value: tenanta,tenantb
  - name: unitName
    value: canary
I'd like to have a module like this to access the settings but I can't see how to get to the bottom level:
locals {
  settings = yamldecode(file("../settings.yml"))
}

module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
But the terraform plan errors with this:
Error: Unsupported attribute
on canary.tf line 16, in module "infra":
16: unitname = local.settings.variables.unitName
|----------------
| local.settings.variables is tuple with 2 elements
This value does not have any attributes.
It seems like the main reason this is difficult is because this YAML file is representing what is logically a single map but is physically represented as a YAML list of maps.
When reading data from a separate file like this, I like to write an explicit expression to normalize it and optionally transform it for more convenient use in the rest of the Terraform module. In this case, it seems like having variables as a map would be the most useful representation as a Terraform value, so we can write a transformation expression like this:
locals {
  raw_settings = yamldecode(file("${path.module}/../settings.yml"))

  settings = {
    variables = tomap({
      for v in local.raw_settings.variables : v.name => v.value
    })
  }
}
The above uses a for expression to project the list of maps into a single map using the name values as the keys.
With the list of maps converted to a single map, you can then access it the way you originally tried:
module "infra" {
source = "../../../infra/terraform/"
unitname = local.settings.variables.unitName
}
If you were to output the transformed value of local.settings as YAML, it would look something like this, which is why accessing the map elements directly is now possible:
variables:
  tenantsList: tenanta,tenantb
  unitName: canary
This will work only if all of the name strings in your input are unique, because otherwise there would not be a unique map key for each element.
(Writing a normalization expression like this also doubles as some implicit validation for the shape of that YAML file: if variables were not a list or if the values were not all of the same type then Terraform would raise a type error evaluating that expression. Even if no transformation is required, I like to write out this sort of expression anyway because it serves as some documentation for what shape the YAML file is expected to have, rather than having to study all of the references to it throughout the rest of the configuration.)
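As a small extension of the same idea: tenantsList above is still a single comma-separated string after the normalization, so if you need it as a real list you can split it in another local. A sketch building on the settings local defined above:
locals {
  tenants = split(",", local.settings.variables.tenantsList)
  # tenants = ["tenanta", "tenantb"]
}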
With my multidecoder for YAML and JSON you are able to access multiple YAML and/or JSON files with their relative paths in one step.
Documentation can be found here:
Terraform Registry -
https://registry.terraform.io/modules/levmel/yaml_json/multidecoder/latest?tab=inputs
GitHub:
https://github.com/levmel/terraform-multidecoder-yaml_json
Usage
Place this module in the location where you need to access multiple different YAML and/or JSON files (different paths are possible) and pass
your path(s) in the parameter filepaths, which takes a set of strings of the relative paths of YAML and/or JSON files as an argument. You can change the module name if you want!
module "yaml_json_decoder" {
source = "levmel/yaml_json/multidecoder"
version = "0.2.1"
filepaths = ["routes/nsg_rules.yml", "failover/cosmosdb.json", "network/private_endpoints/*.yaml", "network/private_links/config_file.yml", "network/private_endpoints/*.yml", "pipeline/config/*.json"]
}
Patterns to access YAML and/or JSON files from relative paths:
To access all YAML and/or JSON files in a folder, enter your path as follows: "folder/rest_of_folders/*.yaml", "folder/rest_of_folders/*.yml" or "folder/rest_of_folders/*.json".
To access a specific YAML or JSON file in a folder structure, use this: "folder/rest_of_folders/name_of_yaml.yaml", "folder/rest_of_folders/name_of_yaml.yml" or "folder/rest_of_folders/name_of_yaml.json".
If you would like to select all YAML and/or JSON files within a folder, then you should use the "*.yml", "*.yaml", "*.json" notation (see the USAGE section above).
YAML delimiter support is available from version 0.1.0!
WARNING: Only the relative path must be specified. Do not pass path.root (it is already included in the module by default), only everything after it.
Access YAML and JSON entries
Now you can access all entries within all the YAML and/or JSON files you've selected like this: "module.yaml_json_decoder.files.[name of your YAML or JSON file].entry". If the name of your YAML or JSON file is "name_of_your_config_file", then access it as follows: "module.yaml_json_decoder.files.name_of_your_config_file.entry".
Example of multi YAML and JSON file accesses from different paths (directories)
first YAML file:
routes/nsg_rules.yml
rdp:
  name: rdp
  priority: 80
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 3399
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
---
ssh:
  name: ssh
  priority: 70
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 24
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
second YAML file:
services/logging/monitoring.yml
application_insights:
  application_type: other
  retention_in_days: 30
  daily_data_cap_in_gb: 20
  daily_data_cap_notifications_disabled: true
  logs:
    # Optional fields
    - "AppMetrics"
    - "AppAvailabilityResults"
    - "AppEvents"
    - "AppDependencies"
    - "AppBrowserTimings"
    - "AppExceptions"
    - "AppExceptions"
    - "AppPerformanceCounters"
    - "AppRequests"
    - "AppSystemEvents"
    - "AppTraces"
first JSON file:
test/config/json_history.json
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}
main.tf
module "yaml_json_multidecoder" {
source = "levmel/yaml_json/multidecoder"
version = "0.2.1"
filepaths = ["routes/nsg_rules.yml", "services/logging/monitoring.yml", test/config/*.json]
}
output "nsg_rules_entry" {
value = module.yaml_json_multidecoder.files.nsg_rules.aks.ssh.source_address_prefix
}
output "application_insights_entry" {
value = module.yaml_json_multidecoder.files.monitoring.application_insights.daily_data_cap_in_gb
}
output "json_history" {
value = module.yaml_json_multidecoder.files.json_history.glossary.title
}
Changes to Outputs:
  nsg_rules_entry = "VirtualNetwork"
  application_insights_entry = 20
  json_history = "example glossary"

terraform: lambda invocation during destroy

I have a Lambda invocation in our Terraform-built environment:
data "aws_lambda_invocation" "this" {
count = var.invocation == "true" ? 1 : 0
function_name = aws_lambda_function.this.function_name
input = <<JSON
{
"Name": "Invocation"
}
JSON
}
The problem: the function is invoked not only during creation ("apply") but also during deletion ("destroy"). How can I invoke it during creation only? I thought about checking environment variables in the Lambda (perhaps TF adds the name of the process there or something like that), but I hope there's a better way.
It's worth checking if you can use the -var 'lambda_xxx=execute' option while running the terraform command to control whether the Lambda code needs to be executed or not (terraform docs).
Using that variable lambda_xxx, passed in via the command line while executing the command, you can check in the Terraform code whether you want to run the Lambda code or not.
The code below creates a WAF rule only if the count is 1:
resource "aws_waf_rule" "wafrule" {
depends_on = ["aws_waf_ipset.ipset"]
name = "${var.environment}-WAFRule"
metric_name = "${replace(var.environment, "-", "")}WAFRule"
count = "${var.is_waf_enabled == "true" ? 1 : 0}"
predicates {
data_id = "${aws_waf_ipset.ipset.id}"
negated = false
type = "IPMatch"
}
}
The variable is declared in the variables.tf file:
variable "is_waf_enabled" {
type = "string"
default = "false"
description = "String value to indicate if WAF/API KEY is turned on or off (true/any_value)"
}
When you run the command, any value other than true is considered false, as we are just checking for the string true.
Similarly, you can do this for your Lambda, as sketched below.
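Applied to the question's data source, that would look something like this (a sketch; the variable name invocation is taken from the question, and the default shown here is illustrative):
variable "invocation" {
  type        = "string"
  default     = "false"
  description = "Set to true to run the aws_lambda_invocation data source"
}

# Invoke the function when building the environment:
#   terraform apply -var 'invocation=true'
#
# Skip the invocation when tearing it down:
#   terraform destroy -var 'invocation=false'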
There are better alternative solutions for this problem now, which weren't available at the time the question was asked.
Lambda Invocation Resource in AWS provider: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_invocation
Lambda Based Resource in LambdaBased provider: https://registry.terraform.io/providers/thetradedesk/lambdabased/latest/docs/resources/lambdabased_resource
With the disclaimer that I'm the developer of the latter one: if the underlying problem is managing a resource through Lambda functions, the Lambda-based resource has some good features tailored specifically to accomplish that, with the obvious drawback of adding another provider dependency.
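For the first of those links, a minimal sketch of what the managed resource looks like (argument names as documented for the hashicorp/aws aws_lambda_invocation resource; unlike the data source, it is tied to the resource lifecycle, so it is not re-read on every plan and, by default, is not invoked on destroy):
resource "aws_lambda_invocation" "this" {
  function_name = aws_lambda_function.this.function_name

  input = jsonencode({
    Name = "Invocation"
  })
}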

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the ruby function
#!/usr/bin/env ruby
require 'json'
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access data inside of the external data source. I am unable.
Ultimately what I want to do is create a list of the componentID variables which are located within the external data source.
Some things I have tried
* output.external: key "0" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform can't decode JSON directly in 0.11.x and earlier (jsondecode() was added in 0.12), but there is a workaround for working with nested lists.
Your Ruby needs to be adjusted to produce output shaped like the variable http below; then you should be fine to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU
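For completeness: on Terraform 0.12 and later you can skip the Ruby/external step entirely and decode the HTTP response in place with jsondecode(). A sketch using the attribute names from the JSON shown in the question (note that newer versions of the http provider expose the attribute as response_body rather than body):
locals {
  components    = jsondecode(data.http.example.body)
  component_ids = [for c in local.components : c.componentID]
}

output "component_ids" {
  value = local.component_ids
}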

Puppet 6 and module puppetlabs/accounts does not create user account in Hiera YAML format

When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey#puppetlabs.com
I use this module
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want to apply the stored user data presented to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})

$user_data.each |$user,$props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
* the lookup function looks up key 'accounts::user' in your Hiera data,
  * performing a hash merge of results appearing at different levels of the hierarchy,
  * expecting the result to be a hash with string keys and hash values,
  * and defaulting to an empty hash if no results are found;
* the mappings in the result hash are iterated, and for each one, an instance of the accounts::user defined type is declared,
  * using the (outer) hash key as the user name,
  * and the value associated with that key as a mapping from parameter names to parameter values.
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults:  ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user,$props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users:  ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey#puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)
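A quick way to check that Hiera is actually resolving the key for a given node, independently of the catalog, is the puppet lookup command (a sketch; the certname below is a placeholder):
# Run on the Puppet server; replace the certname with a real node.
puppet lookup test::users --node agent01.example.com --explain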

AWS Lambda code explanation

Can anybody please explain how the code below works?
"def lambda_handlerOut(event, context):
if len(event) > 0:
success=1
print("length of event outside for--"+str(len(event)))
for record in event['Records']:
print("length of event--"+str(len(event)))
bucket=record['s3']['bucket']['name']
key=record['s3']['object']['key']
print("Bucket--"+bucket)
print("File that triggered this event--"+key)
Thanks in advance.
Regards,
Eleena Jose
This is a Lambda that receives S3 events - for example a PutObject request that creates a new file.
The method is the standard Python function - take a look at the Lambda Function Handler Docs for more details.
The structure of the event is defined here, but basically there are some number of Records that are iterated through and, for each record, the bucket and key are extracted and printed.
So, in more detail (comments above the line they reference):
# standard lambda event handler definition
def lambda_handlerOut(event, context):
    # make sure that something was given - likely unneeded
    if len(event) > 0:
        success=1
        print("length of event outside for--"+str(len(event)))
        # loop through each record in Records
        for record in event['Records']:
            print("length of event--"+str(len(event)))
            # take a look at the event structure - just extracting parts
            bucket=record['s3']['bucket']['name']
            # key is the object name - that is, the file
            key=record['s3']['object']['key']
            print("Bucket--"+bucket)
            print("File that triggered this event--"+key)
EDIT
As I linked to above, the data in the event object looks something like:
{
  "Records":[
    {
      "eventVersion":"2.0",
      "eventSource":"aws:s3",
      "awsRegion":"us-east-1",
      "eventTime":"1970-01-01T00:00:00.000Z",
      "eventName":"ObjectCreated:Put",
      "userIdentity":{
        "principalId":"AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters":{
        "sourceIPAddress":"127.0.0.1"
      },
      "responseElements":{
        "x-amz-request-id":"C3D13FE58DE4C810",
        "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3":{
        "s3SchemaVersion":"1.0",
        "configurationId":"testConfigRule",
        "bucket":{
          "name":"mybucket",
          "ownerIdentity":{
            "principalId":"A3NL1KOZZKExample"
          },
          "arn":"arn:aws:s3:::mybucket"
        },
        "object":{
          "key":"HappyFace.jpg",
          "size":1024,
          "eTag":"d41d8cd98f00b204e9800998ecf8427e",
          "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
          "sequencer":"0055AED6DCD90281E5"
        }
      }
    }
  ]
}
So, as an example, bucket=record['s3']['bucket']['name'] starts by getting the s3 record from the data which leaves:
"s3":{
"s3SchemaVersion":"1.0",
"configurationId":"testConfigRule",
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
},
"object":{
"key":"HappyFace.jpg",
"size":1024,
"eTag":"d41d8cd98f00b204e9800998ecf8427e",
"versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
"sequencer":"0055AED6DCD90281E5"
}
}
From there, it gets the bucket stanza:
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
}
and lastly, the name:
"name":"mybucket"
This is assigned to the variable bucket which is printed out later. The key (which is the file name in this example) works the same way but gets different parts of the event.
Does that make sense now?
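If it helps to see it run outside AWS, you can feed the handler a stripped-down fake event containing only the fields it actually reads (the bucket name and object key below are just placeholders):
# Quick local check of the handler with a minimal fake event.
if __name__ == "__main__":
    sample_event = {
        "Records": [
            {
                "s3": {
                    "bucket": {"name": "mybucket"},
                    "object": {"key": "HappyFace.jpg"},
                }
            }
        ]
    }
    lambda_handlerOut(sample_event, None)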
