How to use OHAI attributes within a chef role that is defined in JSON format? - ruby

I want to know the correct way to assign the node['ipaddress'] Ohai attribute within a Chef role. My role is defined in JSON format.
{
  "name": "temp_role",
  "description": "This is temp role",
  "json_class": "Chef::Role",
  "default_attributes": {
    "client_addr": #{node['ipaddress']}
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "recipe[test::prereq]"
  ],
  "env_run_lists": {
  }
}

You cannot use Ohai data in roles, JSON format or otherwise. Roles are purely static data, converted to JSON during upload even if they are written in the .rb DSL. Anything dynamic must live in a cookbook; in this case, probably a role-pattern cookbook.
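For example, a role-pattern cookbook's recipe can assign the value at converge time, when Ohai data is available (the attribute name here mirrors the question; the recipe itself is an illustrative sketch, not from the original answer):

```ruby
# recipes/default.rb of a hypothetical role-pattern cookbook.
# Ohai data such as node['ipaddress'] exists when chef-client runs,
# so the dynamic assignment lives here instead of in the static role.
node.default['client_addr'] = node['ipaddress']
```

The role then simply adds this recipe to its run_list.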

Related

fetch attribute type from terraform provider schema

I am trying to find a way to fetch the attribute type of a resource/data source from a Terraform provider's schema (I am currently using GCP, but will be extending this to pretty much all providers).
My current setup flow:
1) I run terraform providers schema -json to fetch the provider's schema.
2) This generates a huge JSON file with the schema structure of the provider.
ref:
How to get that list of all the available Terraform Resource types for Google Cloud?
https://developer.hashicorp.com/terraform/cli/commands/providers/schema
3) From this I am trying to fetch the type of each attribute, e.g.:
"google_cloud_run_service": {
  "version": 1,
  "block": {
    "attributes": {
      "autogenerate_revision_name": {
        "type": "bool",
        "description_kind": "plain",
        "optional": true
      },
4) My end goal is to generate variables.tf from the above schema for all resources and all attributes supported in each resource, along with the type constraint.
ref: https://developer.hashicorp.com/terraform/language/values/variables
5) I already got some help on how to generate that.
ref: Get the type of value using cty in hclwrite
6) Now the challenge is to handle complex structures like the one below, which is one of the attributes of "google_cloud_run_service":
"status": {
  "type": [
    "list",
    [
      "object",
      {
        "conditions": [
          "list",
          [
            "object",
            {
              "message": "string",
              "reason": "string",
              "status": "string",
              "type": "string"
            }
          ]
        ],
        "latest_created_revision_name": "string",
        "latest_ready_revision_name": "string",
        "observed_generation": "number",
        "url": "string"
      }
    ]
  ],
  "description": "The current status of the Service.",
  "description_kind": "plain",
  "computed": true
}
7) So, based on the above complex structure type, I want to generate the variables.tf entry for this kind of attribute using the code sample from point 5, and the desired output should look something like this in variables.tf:
variable "autogenerate_revision_name" {
  type        = string
  default     = ""
  description = "Sample description"
}
variable "status" {
  type = list(object({
    conditions = list(object({
      message = string
      reason  = string
      status  = string
      type    = string
    }))
    latest_created_revision_name = string
    latest_ready_revision_name   = string
    observed_generation          = number
    url                          = string
  }))
  default = "default values in the above type format"
}
The above was manually written, so it might not exactly align with the schema, but I hope it makes clear what I am trying to achieve.
The first variable in the above code comes from the first example I gave in point 3, which is easy to generate; the second example, from point 6, is a complex type constraint, and I am seeking help to get this generated.
Is this possible to generate using the helper schema SDK (https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/v2#v2.24.0/helper/schema), along with the code example from point 5?
Summary: I am generating the JSON schema of a Terraform provider using terraform providers schema -json, reading that JSON file, and generating HCL code for each resource, but I am stuck generating the type constraints for the attributes/variables, hence seeking help on this.
Any sort of help is really appreciated, as I have been stuck on this for quite a while.
If you've come this far, then I thank you for reading such a lengthy question; any pointers are welcome.
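No answer is captured here, but to illustrate the recursive shape of the problem, here is a minimal sketch (in Ruby, since the rest of this digest's examples are Ruby; a real generator would likely use Go with hclwrite, as in point 5) that turns the schema's JSON type encoding into an HCL type-constraint string. It handles primitives, list/set/map, and object; tuples are deliberately not handled:

```ruby
require 'json'

# Render a Terraform type, as encoded in `terraform providers schema -json`
# output, into an HCL type-constraint string. A type is either a primitive
# name ("string", "bool", "number") or a two-element array such as
# ["list", elem_type] or ["object", {attr_name => attr_type, ...}].
def hcl_type(type)
  case type
  when String
    type
  when Array
    kind, arg = type
    if kind == 'object'
      attrs = arg.map { |name, t| "#{name} = #{hcl_type(t)}" }
      "object({ #{attrs.join(', ')} })"
    else
      # list(...), set(...), map(...); tuple is not covered by this sketch
      "#{kind}(#{hcl_type(arg)})"
    end
  end
end

# The "status" attribute's type from the question, abbreviated:
status_type = JSON.parse(
  '["list", ["object", {"conditions": ["list", ["object", ' \
  '{"message": "string"}]], "url": "string"}]]'
)
puts hcl_type(status_type)
# prints: list(object({ conditions = list(object({ message = string })), url = string }))
```

The same recursion would then feed the type string into the variable block emitter from point 5.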

Can I create extended properties in Google People API and Task API?

I have added an extended property to a Google Calendar entry and been able to read it back successfully. The format of the JSON is like this:
"extendedProperties": {
  "private": {
    "MyPropertyName": "yes"
  }
},
I want to do the same thing for created Task entries and contact entries (via the People API). With the People API, trying to create the entry results in HTTP 400. With the Tasks API, it accepts the JSON, but the property is not returned when I retrieve the task.
Is it possible to do what I want with the current versions of the People and Tasks APIs?
In the People API, extended properties are called ClientData.
The JSON structure of the resource is:
{
  "metadata": {
    object (FieldMetadata)
  },
  "key": string,
  "value": string
}
with FieldMetadata:
{
  "primary": boolean,
  "sourcePrimary": boolean,
  "verified": boolean,
  "source": {
    object (Source)
  }
}
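As a hedged sketch (the endpoint, query parameter, and etag requirement come from the People API reference, not the original answer), a ClientData entry could be written with people.updateContact, e.g. PATCH https://people.googleapis.com/v1/{resourceName}:updateContact?updatePersonFields=clientData with a body along these lines:

```json
{
  "etag": "etag-from-a-prior-people.get-call",
  "clientData": [
    {
      "key": "MyPropertyName",
      "value": "yes"
    }
  ]
}
```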

How do I use FreeFormTextRecordSetWriter

In my NiFi controller I want to configure the FreeFormTextRecordSetWriter, but I have no idea what I should put in the "Text" field. I'm getting the text from my source (in my case GetSolr) and just want to write it out as-is, period.
The documentation and mailing list do not seem to tell me how this is done; any help appreciated.
EDIT: Here is the sample input + output I want to achieve (as you can see: no transformation needed, plain text, no JSON output).
EDIT: I now realize that I can't tell GetSolr to return just CSV data - I have to use JSON.
So referencing with attributes seems to be fine. What the documentation omits is that the ${flowFile} attribute should contain the complete FlowFile content that is returned.
Sample input:
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "_": "1553686715465"
    }
  },
  "response": {
    "numFound": 3194,
    "start": 0,
    "docs": [
      {
        "id": "{402EBE69-0000-CD1D-8FFF-D07756271B4E}",
        "MimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        "FileName": "Test.docx",
        "DateLastModified": "2019-03-27T08:05:00.103Z",
        "_version_": 1629145864291221504,
        "LAST_UPDATE": "2019-03-27T08:16:08.451Z"
      }
    ]
  }
}
Wanted output
{402EBE69-0000-CD1D-8FFF-D07756271B4E}
BTW: The documentation says this:
The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
I want to use my source's text, so I'm confused
You need to use expression language as if the record's fields are the FlowFile's attributes.
Example:
Input:
{
  "t1": "test",
  "t2": "ttt",
  "hello": true,
  "testN": 1
}
Text property in FreeFormTextRecordSetWriter:
${t1} k!${t2} ${hello}:boolean
${testN}Num
Output(using ConvertRecord):
test k!ttt true:boolean
1Num
EDIT:
It seems what you actually need is to read from Solr and write a single-column CSV. For that, you need to use CSVRecordSetWriter:
You should consider upgrading to 1.9.1; starting from 1.9.0, the schema can be inferred for you.
Otherwise, you can set Schema Access Strategy to Use 'Schema Text' Property,
then use the following schema in Schema Text:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    }
  ]
}
this should work

kinesis agent to lambda, how to get origin file and server

I have a Kinesis agent that streams a lot of log file information to Kinesis streams, and I have a Lambda function that parses the info.
In Lambda, in addition to the string, I need to know the source file name and machine name - is that possible?
You can add it to the data that you send to Kinesis.
Lambda gets Kinesis records as a base64 string; you can encode into this string a JSON of this form:
{
  "machine": [machine],
  "data": [original data]
}
And then, when processing the records on Lambda (Node.js):
let record_object = JSON.parse(Buffer.from(event.Records[0].kinesis.data, 'base64').toString('utf8'));
let machine = record_object.machine;
let data = record_object.data;
Assuming you are using the Kinesis Agent to produce the data stream: the open-source community has added ADDEC2METADATA as a preprocessing option in the agent (see the agent's source code).
Make sure that the source content file is in JSON format. If the original format is CSV, then use the CSVTOJSON transformer first to convert it to JSON, and then pipe it to the ADDEC2METADATA transformer as shown below.
Open agent.json and add the following:
{
  "flows": [
    {
      "filePattern": "/tmp/app.log*",
      "kinesisStream": "my-stream",
      "dataProcessingOptions": [
        {
          "optionName": "CSVTOJSON",
          "customFieldNames": ["your", "custom", "field", "names", "here", "if", "origin", "file", "is", "csv"],
          "delimiter": ","
        },
        {
          "optionName": "ADDEC2METADATA",
          "logFormat": "RFC3339SYSLOG"
        }
      ]
    }
  ]
}
If your code is running in a container/ECS/EKS etc., where the originating info is not as simple as collecting info about a bare-metal EC2 instance, then use the "ADDMETADATA" declarative as shown below in the agent.json file:
{
  "optionName": "ADDMETADATA",
  "timestamp": "true/false",
  "metadata": {
    "key": "value",
    "foo": {
      "bar": "baz"
    }
  }
}

How do I access data from a JSON file in Ruby tests?

I have a JSON file:
{
  "user": "John",
  "first": "John",
  "last": "Wilson",
  "updated": "2013-02-17",
  "generated_at": "2013-02-13",
  "version": 1.1
}
I want to use this as the data file for my Ruby test and want to access the data in this file. I am doing the data verification as:
application[data_first].should eq 'John'
I want to refer to the expected data from the JSON file using something like:
application[data_first].should eq JSON_file[first]
Assuming the JSON file's name is "JSON_file", I have also added require 'JSON_file' at the top of my test script.
How do I access the data from JSON file?
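No answer is captured here, but the usual approach is to parse the file with Ruby's stdlib json library rather than require it (you require libraries, not data files). A minimal sketch - the fixture is inlined here so the example is self-contained; in a real test you would read it from disk, and the RSpec-style expectation at the end assumes the asker's application/data_first names:

```ruby
require 'json'

# In a real test, read the fixture from disk, e.g.:
#   json_data = JSON.parse(File.read('data.json'))
# Inlined here so the example runs on its own:
json_data = JSON.parse(<<~JSON)
  {
    "user": "John",
    "first": "John",
    "last": "Wilson",
    "version": 1.1
  }
JSON

# JSON.parse returns a Hash with *string* keys, so use json_data['first'].
# In an RSpec example this would become:
#   expect(application[data_first]).to eq json_data['first']
puts json_data['first']   # prints "John"
```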
