I am looking for a way to dynamically set the key using the path of the file below.
For example, if I have this YAML:
prospectors.config:
- fields:
    queue_name: <somehow get the globbed string below in here>
  paths:
  - /var/log/casino/*.log
  type: log
output.redis:
  hosts:
  - "producer:6379"
  key: "%{[fields.queue_name]}"
And then if I had a file called /var/log/casino/test.log, the key would become test.
I'm not sure that what you want is possible.
You could use the source field and configure your Redis output using that as the key:
output.redis:
  hosts:
  - "producer:6379"
  key: "%{source}"
This has the disadvantage of using the absolute path of the source file, not the basename your question asks for: a line from /var/log/casino/test.log would go to a key named /var/log/casino/test.log rather than test.
If you have a small number of possible basename patterns and want a queue for each, there is another option. For example, say you have the files:
/common/path/test-1.log
/common/path/foo-0.log
/common/path/01-bar.log
/common/path/test-3.log
...
and you want three queues in Redis, test, foo, and bar. You could use the source field and the conditionals available in the keys configuration of the Redis output, something like this:
output.redis:
  hosts:
  - "producer:6379"
  key: "default_key"
  keys:
  - key: "test_key"
    when.contains:
      source: "test"
  - key: "foo_key"
    when.contains:
      source: "foo"
  - key: "bar_key"
    when.contains:
      source: "bar"
I have a strange problem that I can't seem to get my head around. I am trying to define some variables for use in a job that will deploy Bicep files via the Azure CLI and execute PowerShell tasks.
I get this validation error when I try to execute the pipeline: While parsing a block mapping, did not find expected key
The line that it refers to is: - name: managementResourceDNSPrivateResolverName
From the research I have done on this problem, it sounds like an indentation issue, but on the face of it, the YAML looks fine.
jobs:
- job: 'Deploy_Management_Resources'
  pool:
    vmImage: ${{ parameters.buildAgent }}
  variables:
  - name: managementResourceDNSPrivateResolverName
    value: 'acme-$[ lower(parameters['environmentCode']) ]-$[ lower(variables['resourceLocationShort']) ]-private-dns-resolver'
  - name: managementResourceGroupManagement
    value: 'acme-infrastructure-rg-management'
  - name: managementResourceRouteTableName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-route-table'
  - name: managementResourceVirtualNetworkName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-vnet-internal-mng'
Thanks!
The error message ...parsing a block mapping, did not find expected key is usually a side-effect of malformed YAML. You'll see it often with variables if you have mixed formats of arrays and property elements:
variables: # an array of objects
# variable group reference object
- group: myvariablegroup
# variable template reference object
- template: my-variables.yml
# variable object
- name: myVariable
  value: 'value1'
# variable shorthand syntax
myVariable: 'value1' # this fails because it's a property instead of an array element
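For contrast, a sketch of the same entries kept consistently in the array form, which parses cleanly:
variables: # every entry is an array element
- group: myvariablegroup
- template: my-variables.yml
- name: myVariable
  value: 'value1'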
While it doesn't appear that the sample you've provided is malformed, I am curious about the use of $[ ], which is a runtime expression. The expression $[ lower(parameters['environmentCode']) ] refers to parameters, which are only available at compile time.
Change:
$[ lower(parameters['environmentCode']) ] to ${{ lower(parameters.environmentCode) }}
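For illustration, a minimal sketch of the first variable with the parameters reference switched to a compile-time template expression (the other name segments are omitted here for brevity):
variables:
- name: managementResourceDNSPrivateResolverName
  value: 'acme-${{ lower(parameters.environmentCode) }}-private-dns-resolver'
Template expressions like this are expanded when the pipeline is compiled, so the parameter value is baked into the variable before any job runs.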
This is my first time working with YAML, but I am running into an issue: it seems that if I want to include a variable group (e.g., for a signing certificate password) along with local pipeline variables, then I cannot use the simplified syntax where a variable's name and value are defined on the same line.
For example, what I want is something similar to this (I made sure the spacing is correct in the YAML):
variables:
  solutionName: Foo.sln
  projectName: Bar
  buildPlatform: x64
  buildConfiguration: development
  major: '1'
  minor: '0'
  build: '0'
  revision: $[counter('rev', 0)]
  vhdxSize: '200'
  - group: legacy-pipeline
  signingCertPwd: $[variables.SigningCertificatePassword]
But this results in a parsing error. As a result, I have to use a denser, more bloated-looking format:
variables:
- name: solutionName
  value: Foo.sln
- name: projectName
  value: Bar
- name: buildPlatform
  value: x64
- name: buildConfiguration
  value: development
- name: major
  value: '1'
- name: minor
  value: '0'
- name: build
  value: '0'
- name: revision
  value: $[counter('rev', 0)]
- name: vhdxSize
  value: '200'
- group: legacy-pipeline
- name: signingCertPwd
  value: $[variables.SigningCertificatePassword]
It seems like the simplified format is only available for local variables; as soon as I add a variable group, the simplified format goes away. I have tried searching the web for a solution but cannot find anything useful. Is what I am trying to achieve possible? If yes, how can it be done?
Unfortunately mixing the styles is not possible, but you can work around that using templates:
# pipeline.yaml
stages:
- stage: declare_vars
  variables:
  - template: templates/vars.yaml
  - group: my-group
  - template: templates/inline-vars.yaml
    parameters:
      vars:
        inline_var: yes!
        and_more: why not
  jobs:
  - job:
    steps:
    - pwsh: |
        echo 'foo=$(foo)'
        echo 'bar=$(bar)'
        echo 'var1=$(var1)'
        echo 'inline_var=$(inline_var)'
# templates/vars.yaml
variables:
  foo: bar
  bar: something else
# templates/inline-vars.yaml
parameters:
- name: vars
  type: object
  default: {}

variables:
  ${{ each var in parameters.vars }}:
    ${{ var.key }}: ${{ var.value }}
templates/vars.yaml simply moves the variables into another file.
templates/inline-vars.yaml lets you define inline variables using the denser syntax together with referencing groups, though there is the additional ceremony of writing template:, parameters:, and vars:.
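Conceptually, with the parameters passed above, templates/inline-vars.yaml expands to the shorthand mapping form before the pipeline runs:
# conceptual result of expanding templates/inline-vars.yaml
variables:
  inline_var: yes!
  and_more: why not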
When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
datadir: "/etc/puppetlabs/code/environments"
data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey#puppetlabs.com
I use this module.
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want to apply the stored user data presented to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user, $props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
- the lookup function looks up the key 'accounts::user' in your Hiera data,
- performing a hash merge of results appearing at different levels of the hierarchy (illustrated after this list),
- expecting the result to be a hash with string keys and hash values,
- and defaulting to an empty hash if no results are found;
- the mappings in the result hash are then iterated, and for each one an instance of the accounts::user defined type is declared,
- using the (outer) hash key as the user name,
- and the value associated with that key as a mapping from parameter names to parameter values.
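To illustrate the hash merge, suppose two of the hierarchy levels both define accounts::user (the admin1 entry here is hypothetical):
# defaults/common.yaml
accounts::user:
  admin1:
    comment: Site Administrator

# defaults/users.yaml
accounts::user:
  joed:
    comment: System Operator
With the 'hash' merge strategy, lookup returns a single hash containing both admin1 and joed, whereas the default 'first' strategy would return only the hash from the highest-priority level.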
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults: ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users: ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey#puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)
I have a command-line tool in CWL which can take the following input:
fastq:
  class: File
  path: /path/to/fastq_R1.fq.gz
fastq2:
  class: File
  path: /path/to/fastq_R2.fq.gz
sample_name: foo
Now I want to scatter over this command-line tool, and the only way I can think to do it is with scatterMethod: dotproduct and an input of something like:
fastqs:
  - class: File
    path: /path/to/fastq_1_R1.fq.gz
  - class: File
    path: /path/to/fastq_2_R1.fq.gz
fastq2s:
  - class: File
    path: /path/to/fastq_1_R2.fq.gz
  - class: File
    path: /path/to/fastq_2_R2.fq.gz
sample_names: [foo, bar]
Is there any other way to design the workflow and/or input file such that each input group is kept together? Something like:
paired_end_fastqs:
  - fastq:
      class: File
      path: /path/to/fastq_1_R1.fq.gz
    fastq2:
      class: File
      path: /path/to/fastq_1_R2.fq.gz
    sample_name: foo
  - fastq:
      class: File
      path: /path/to/fastq_2_R1.fq.gz
    fastq2:
      class: File
      path: /path/to/fastq_2_R2.fq.gz
    sample_name: bar
You could accomplish this with a nested workflow wrapper that maps a record to each individual field, and then use that workflow to scatter over an array of records. The wrapper would look something like this:
---
class: Workflow
cwlVersion: v1.0
id: workflow
inputs:
  - id: paired_end_fastqs
    type:
      type: record
      name: paired_end_fastqs
      fields:
        - name: fastq
          type: File
        - name: fastq2
          type: File
        - name: sample_name
          type: string
outputs: []
steps:
  - id: tool
    in:
      - id: fastq
        source: paired_end_fastqs
        valueFrom: "$(self.fastq)"
      - id: fastq2
        source: paired_end_fastqs
        valueFrom: "$(self.fastq2)"
      - id: sample_name
        source: paired_end_fastqs
        valueFrom: "$(self.sample_name)"
    out: []
    run: "./tool.cwl"
requirements:
  - class: StepInputExpressionRequirement
Specify a workflow input of type record, with fields for each of the inputs the tool accepts, the ones you want to keep together while scattering. Connect the workflow input to each tool input as its source. Using valueFrom on the step inputs, transform the record (self is the source in this context) to pass only the appropriate field to the tool.
More about valueFrom in workflow steps: https://www.commonwl.org/v1.0/Workflow.html#WorkflowStepInput
Then use this wrapper in your actual workflow and scatter over paired_end_fastqs with an array of records.
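For example, assuming the wrapper above is saved as wrapper.cwl and the outer workflow declares an input named pairs holding an array of these records (both names are hypothetical), the scattering step might look roughly like this:
steps:
  - id: process_pairs
    run: "./wrapper.cwl"
    scatter: paired_end_fastqs
    in:
      - id: paired_end_fastqs
        source: pairs
    out: []
requirements:
  - class: ScatterFeatureRequirement
  - class: SubworkflowFeatureRequirement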
When I was reviewing the cryptogen (a Fabric command) config file, I saw some unfamiliar symbols.
Profiles:
  SampleInsecureSolo:
    Orderer:
      <<: *OrdererDefaults ## what is the `<<`?
      Organizations:
        - *ExampleCom ## what is the `*`?
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1ExampleCom
          - *Org2ExampleCom
Above there are two symbols, << and *.
Application: &ApplicationDefaults # what does the `&` mean?
  Organizations:
As you can see, there is another symbol, &.
I don't know what these symbols mean. I didn't find any information even by reviewing the source code (fabric/common/configtx/tool/configtxgen/main.go).
Well, those are elements of the YAML format, which is used here to provide a configuration file for configtxgen. The & sign denotes an anchor and * a reference to that anchor; this is basically used to avoid duplication. For example:
person: &person
  name: "John Doe"

employee: &employee
  <<: *person
  salary: 5000
will reuse the fields of person and has a similar meaning to:
employee: &employee
  name: "John Doe"
  salary: 5000
Another example is simply reusing a value:
key1: &key some very common value
key2: *key
equivalent to:
key1: some very common value
key2: some very common value
Since fabric/common/configtx/tool/configtxgen/main.go uses an off-the-shelf YAML parser, you won't find any reference to these symbols in the configtxgen-related code. I would suggest reading a bit more about the YAML format.
In YAML, if the data is:
user: &userId '123'
username: *userId
the equivalent YAML is:
user: '123'
username: '123'
or the equivalent JSON is:
{
  "user": "123",
  "username": "123"
}
So it basically allows you to reuse data; you can also try it with an array instead of a single value like 123.
Try converting the YAML below to JSON using any online YAML-to-JSON converter:
users: &users
  k1: v1
  k2: v2
usernames: *users
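For reference, the converter should produce:
{
  "users": { "k1": "v1", "k2": "v2" },
  "usernames": { "k1": "v1", "k2": "v2" }
}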