YAML - how to write mapping(s) in a single line

I have the following YAML:
version: "0.1"
services:
  svc:
    image: test
    networks:
      - test_net_1
      - test_net_2
      - test_net_3
networkMapping:
  test_net_1:
    external: true
  test_net_2:
    external: true
  test_net_3:
    external: true
I would like to rewrite the networkMapping in a single line, like the following:
version: "0.2"
services:
  svc:
    image: test
    networks: ['test_net_1', 'test_net_2', 'test_net_3']
networkMapping: {{'test_net_1': {'external': true}}, {'test_net_2': {'external': true}}, {'test_net_3': {'external': true}}}
but when I lint/parse it, it comes back like this:
version: "0.2"
services:
  svc:
    image: test
    networks:
      - test_net_1
      - test_net_2
      - test_net_3
networkMapping:
  ?
    test_net_1:
      external: true
  : ~
  ?
    test_net_2:
      external: true
  : ~
  ?
    test_net_3:
      external: true
  : ~
and it causes an error in the app: 'invalid map key: map[interface {}]interface {}{"test_net_1":map[interface {}]interface {}{"external":true}}'.
I checked with double quotes instead of single quotes, and without quotes too, but no luck :(.
We can change it to a sequence by replacing the first and last {} with [], but the app needs it as a mapping rather than a sequence.
Just wondering if anyone has had a similar problem and found a solution?
Many thanks in advance.

You use too many {}. This is how you should write it:
networkMapping: {'test_net_1': {'external': true}, 'test_net_2': {'external': true}, 'test_net_3': {'external': true}}
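To see why your original line failed: in flow style, {a: 1, b: 2} is one mapping with two keys, while {{a: 1}, {b: 2}} is a mapping whose keys are themselves mappings (with null values). That is exactly what the linter echoed back with the ? complex-key markers and ~ nulls. The corrected single line above, re-serialized in block style, is equivalent to your original:
networkMapping:
  test_net_1:
    external: true
  test_net_2:
    external: true
  test_net_3:
    external: true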

Can't set document_id for deduplicating docs in Filebeat

What are you trying to do?
I have location data from some sensors, and I want to make geo-spatial queries to find which sensors are in a specific area (query by polygon, bounding box, etc.). The location data (lat-lon) for these sensors may change in the future. I should be able to paste JSON files in ndjson format into the watched folder and overwrite the existing data with the new location data for each sensor.
I also have another filestream input for indexing the logs of these sensors.
I went through the docs for deduplication and the filestream ndjson input, and followed them exactly.
Show me your configs.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: filestream
  id: "log"
  enabled: true
  paths:
    - D:\EFK\Data\Log\*.json
  parsers:
    - ndjson:
        keys_under_root: true
        add_error_key: true
  fields.doctype: "log"
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  parsers:
    - ndjson:
        keys_under_root: true
        add_error_key: true
        document_id: "Id" # Not working as expected.
  fields.doctype: "location"
  processors:
    - copy_fields:
        fields:
          - from: "Lat"
            to: "fields.location.lat"
        fail_on_error: false
        ignore_missing: true
    - copy_fields:
        fields:
          - from: "Long"
            to: "fields.location.lon"
        fail_on_error: false
        ignore_missing: true
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "sensor-%{[fields.doctype]}"
setup.ilm.enabled: false
setup.template:
  name: "sensor_template"
  pattern: "sensor-*"
# ------------------------------ Global Processors --------------------------
processors:
  - drop_fields:
      fields: ["agent", "ecs", "input", "log", "host"]
What does your input file look like?
{"Id":1,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
{"Id":2,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
{"Id":3,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
It's the 'Id' field here that I want to use for deduplicating (overwriting with new) documents.
Update 10/05/22:
I have also tried working with:
json.document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  json.document_id: "Id"
ndjson.document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  ndjson.document_id: "Id"
Straight up document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  document_id: "Id"
Trying to overwrite _id using copy_fields:
processors:
  - copy_fields:
      fields:
        - from: "Id"
          to: "#metadata_id"
      fail_on_error: false
      ignore_missing: true
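For what it's worth, the deduplication guide linked in the references below also documents a fingerprint processor that writes straight to @metadata._id; a minimal sketch adapted to the "Id" field from your sample input (untested against your setup) would be:
processors:
  - fingerprint:
      fields: ["Id"]
      target_field: "@metadata._id"
Since the hash is computed only from Id, a re-shipped line with the same Id but new Lat/Long values gets the same _id and overwrites the existing document.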
Elasticsearch config has nothing special other than disabled security. And it's all running on localhost.
Version used for Elasticsearch, Kibana and Filebeat: 8.1.3
Please do comment if you need more info :)
References:
Deduplication in Filebeat: https://www.elastic.co/guide/en/beats/filebeat/8.2/filebeat-deduplication.html#_how_can_i_avoid_duplicates
Filebeat ndjson input: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html#_ndjson
Copy_fields in Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/copy-fields.html#copy-fields

YAML Can local variables be mixed with group-variables AND have their naming simplified?

This is my first time working with YAML, and I am running into an issue: if I want to include a variable group (i.e., the signing certificate password) alongside local pipeline-related variables, then it seems I cannot use the simplified naming convention where a variable's name and value are both defined and set on the same line.
For example, what I want is something similar to this (I made sure the spacing is correct in the YAML):
variables:
  solutionName: Foo.sln
  projectName: Bar
  buildPlatform: x64
  buildConfiguration: development
  major: '1'
  minor: '0'
  build: '0'
  revision: $[counter('rev', 0)]
  vhdxSize: '200'
  - group: legacy-pipeline
  signingCertPwd: $[variables.SigningCertificatePassword]
But this results in a parsing error. As a result, I have to use the longer, more bloated-looking format of:
variables:
  - name: solutionName
    value: Foo.sln
  - name: projectName
    value: Bar
  - name: buildPlatform
    value: x64
  - name: buildConfiguration
    value: development
  - name: major
    value: '1'
  - name: minor
    value: '0'
  - name: build
    value: '0'
  - name: revision
    value: $[counter('rev', 0)]
  - name: vhdxSize
    value: '200'
  - group: legacy-pipeline
  - name: signingCertPwd
    value: $[variables.SigningCertificatePassword]
It seems like the simplified naming format is only available for local variables; as soon as I add a variable group, the simplified format goes away. I have tried searching the web for a solution, but I am not able to find anything useful. Is what I am trying to achieve possible? If yes, how can it be done?
Unfortunately mixing the styles is not possible, but you can work around that using templates:
# pipeline.yaml
stages:
  - stage: declare_vars
    variables:
      - template: templates/vars.yaml
      - group: my-group
      - template: templates/inline-vars.yaml
        parameters:
          vars:
            inline_var: yes!
            and_more: why not
    jobs:
      - job:
          steps:
            - pwsh: |
                echo 'foo=$(foo)'
                echo 'bar=$(bar)'
                echo 'var1=$(var1)'
                echo 'inline_var=$(inline_var)'
# templates/vars.yaml
variables:
  foo: bar
  bar: something else
# templates/inline-vars.yaml
parameters:
  - name: vars
    type: object
    default: {}
variables:
  ${{ each var in parameters.vars }}:
    ${{ var.key }}: ${{ var.value }}
templates/vars.yaml simply moves the variables to another file.
templates/inline-vars.yaml lets you define inline variables using the denser syntax together with referenced groups, at the cost of the additional ceremony of writing template:, parameters: and vars:.
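If it helps to picture the result, after template expansion the stage's variables list behaves roughly as if you had written the following by hand (a sketch, not literal rendered output):
variables:
  - name: foo
    value: bar
  - name: bar
    value: something else
  - group: my-group
  - name: inline_var
    value: yes!
  - name: and_more
    value: why not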

YAML ERROR : mapping values are not allowed

I am trying to build a YAML file but I am getting a 'mapping values are not allowed' error.
name: n1
version: "testv1"
description: n1
icon: n1.png
roles: [postgres]
postgres:
   name: postgreSQL database
   image:
       name: "r/k/postgres/"
       version: "testv1"
       engine: docker
   compute:
       memory: 2G
       cpu:
           reserve: false
           cores: 2
   storage:
       - type: data1
         media: hdd
         path: /var/lib/postgresql/data/pgdata
size: 30G
count: 1
fixed: true
service_ports: [5432]
env:
POSTGRES_PASSWORD:
type: password
value: "postgres"
POSTGRES_DB: postgres
POSTGRES_USER: postgres
(): mapping values are not allowed in this context at line 21 column 14
I can't understand the error on the line: size: 30G
Try adding double quotes to the following:
path: "/var/lib/postgresql/data/pgdata"
Also, keys in the same map must have exactly the same indentation. Check that your indentation is consistent: for example, if you use 3 spaces to indent a key, then every key at that level should be indented with exactly 3 spaces.
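A sketch of how the storage section could look with the list-item keys aligned, assuming size, count and fixed are meant to belong to the same list item as type (adjust to your actual schema):
   storage:
       - type: data1
         media: hdd
         path: "/var/lib/postgresql/data/pgdata"
         size: 30G
         count: 1
         fixed: true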

can not use traefik with dynamic configuration file

I'm trying to learn and use traefik.
here is my docker-compose.yaml:
version: "3"
services:
traefik:
image: "traefik:v2.0"
container_name: "traefik"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- ./traefik:/etc/traefik
- ./docker:/etc/docker
whoami:
image: "containous/whoami"
container_name: "whoami"
and here is my traefik.toml:
[entryPoints]
  [entryPoints.web]
    address = ":80"

[providers]
  [providers.file]
    filename = "/etc/docker/dynamic_conf.toml"
  [providers.docker]
    exposedByDefault = false

[api]
  insecure = true
and this is my dynamic_conf.toml:
[http]
  [http.routers]
    [http.routers.whoami]
      rule = "Host(`whoami.localhost`)"
      entrypoints = "web"
      service = "whoami"
but when I build the image and run it, I get an error:
Cannot start the provider *file.Provider: toml: cannot load TOML value of type string into a Go slice
Screenshot: traefik errors
I couldn't find out the reason. I searched and also tried changing
filename = "/etc/docker/dynamic_conf.toml"
to
filename = ["/etc/docker/dynamic_conf.toml"]
entryPoints is a slice, not a string.
I'm not sure if you need to change the capitalization, but you definitely need to change it to a slice, like this:
entryPoints = ["web"]
You can find an example for it on this page under Priority > Set priorities -- using the File Provider.
Also, the filename property is a string, so leave it as it was before. See this link:
filename = "/etc/docker/dynamic_conf.toml"

How to use the security group existing in horizon in heat template

I'm a newbie with Heat YAML templates loaded by OpenStack.
I've got this command, which works fine:
openstack server create --image RHEL-7.4 --flavor std.cpu1ram1 --nic net-id=network-name.admin-network --security-group security-name.group-sec-default value instance-name
I tried to write this Heat file based on the command above:
heat_template_version: 2014-10-16
description: Simple template to deploy a single compute instance with an attached volume
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      image: RHEL-7.4
      flavor: std.cpu1ram1
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: security-name.group-sec-default
  security-group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: security-name.group-sec-default
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
The stack creation failed with the following error message:
openstack stack create -t my_first.yaml First_stack
openstack stack show First_stack
.../...
| stack_status_reason | Resource CREATE failed: BadRequest: resources.my_instance: Unable to find security_group with name or id 'sec_group1' (HTTP 400) (Request-ID: req-1c5d041c-2254-4e43-8785-c421319060d0)
.../...
Thanks for helping,
According to the template guide, the rules property expects a list.
So, change the security-group resource in the template as below:
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules: [security-name.group-sec-default]
OR
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - security-name.group-sec-default
After digging, I finally found what was wrong in my Heat file. I had to declare my instance like this:
my_instance:
  type: OS::Nova::Server
  properties:
    name: instance-name
    image: RHEL-7.4
    flavor: std.cpu1ram1
    networks:
      - network: network-name.admin-network
    security_groups: [security-name.group-sec-default]
Thanks for your support
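For the record, the fix is the property name: OS::Nova::Server takes security_groups (plural), and it expects a list. The flow-style list above is equivalent to the block style:
security_groups:
  - security-name.group-sec-default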
