Bukkit yaml checker

Here's my code. The YAML checker says there's something wrong with one of the mapping values when I paste it in. (Note: I took the addresses out as they are very confidential; it shouldn't affect anything.)
groups:
  md_5:
  - admin
disabled_commands:
- disabledcommandhere
player_limit: -1
stats: 34cce1fc-17ab-4156-bb9a-a1c06151137d
permissions:
  default:
  - bungeecord.command.server
  - bungeecord.command.list
  admin:
  - bungeecord.command.alert
  - bungeecord.command.end
  - bungeecord.command.ip
  - bungeecord.command.reload
listeners:
- max_players: -1
  fallback_server: hub
  host: 0.0.0.0:25577
  bind_local_address: true
  ping_passthrough: false
  tab_list: GLOBAL_PING
  default_server: hub
  forced_hosts:
    pvp.md-5.net: pvp
  tab_size: 60
  force_default_server: false
  motd: '&1Another Bungee server'
  query_enabled: false
  query_port: 25577
timeout: 30000
connection_throttle: 4000
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
ip_forward: false
online_mode: true
  Skyblock:
    address: 198.50.128.133:25565
    restricted: false
    motd: ''
ip_forward: false
online_mode: true
  Factions:
    address: 198.50.128.143:25565
    motd: ''
ip_forward: false
online_mode: true

The problem is here:
online_mode: true
  Skyblock:
    address: 198.50.128.133:25565
    restricted: false
    motd: ''
The mapping that begins with the Skyblock key is indented under the online_mode key, which would make it the value of that key, but that key already has the value true.
A few lines later you have a second online_mode key—duplicate keys are not allowed, although not all parsers are strict about this—and you repeat the same error as above with Factions.
I'm not certain, but I think what you want is something like this:
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
    ip_forward: false
    online_mode: true
  Skyblock:
    address: 198.50.128.133:25565
    restricted: false
    motd: ''
    ip_forward: false
    online_mode: true
  Factions:
    address: 198.50.128.143:25565
    motd: ''
    ip_forward: false
    online_mode: true
Here the value of the servers key is a mapping with three keys (Hub, Skyblock and Factions), each of whose values is in turn a mapping.
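If you want to double-check locally instead of relying on an online checker, a minimal PyYAML sketch (assuming the corrected config is saved as config.yml and PyYAML is installed) confirms that structure:

import yaml  # pip install pyyaml

# Parse the corrected config and confirm that servers is a mapping with
# three keys, each of whose values is itself a mapping.
with open("config.yml") as f:
    config = yaml.safe_load(f)

print(list(config["servers"]))                   # ['Hub', 'Skyblock', 'Factions']
print(config["servers"]["Skyblock"]["address"])  # 198.50.128.133:25565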

Can't set document_id for deduplicating docs in Filebeat

What are you trying to do?
I have location data from some sensors, and I want to run geo-spatial queries to find which sensors are in a specific area (query by polygon, bounding box, etc.; a rough example of the kind of query I mean is sketched below). The location data (lat/lon) for these sensors may change in the future. I should be able to drop JSON files in NDJSON format into the watched folder and have the new location data overwrite the existing data for each sensor.
I also have another filestream input for indexing the logs of these sensors.
I went through the docs for deduplication and for the filestream ndjson parser and followed them exactly.
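For context, this is roughly the query I want to end up with (a sketch using the Python Elasticsearch client; it assumes the copied fields.location field gets mapped as geo_point in the sensor-location index, which is not shown in the configs below):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Find sensors whose position falls inside a bounding box.
# Assumes "fields.location" is mapped as geo_point in the index template.
resp = es.search(
    index="sensor-location",
    query={
        "geo_bounding_box": {
            "fields.location": {
                "top_left": {"lat": 20.0, "lon": 19.0},
                "bottom_right": {"lat": 18.0, "lon": 21.0},
            }
        }
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_source"])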
Show me your configs.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: filestream
  id: "log"
  enabled: true
  paths:
    - D:\EFK\Data\Log\*.json
  parsers:
    - ndjson:
        keys_under_root: true
        add_error_key: true
  fields.doctype: "log"

- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  parsers:
    - ndjson:
        keys_under_root: true
        add_error_key: true
        document_id: "Id" # Not working as expected.
  fields.doctype: "location"
  processors:
    - copy_fields:
        fields:
          - from: "Lat"
            to: "fields.location.lat"
        fail_on_error: false
        ignore_missing: true
    - copy_fields:
        fields:
          - from: "Long"
            to: "fields.location.lon"
        fail_on_error: false
        ignore_missing: true

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "sensor-%{[fields.doctype]}"

setup.ilm.enabled: false
setup.template:
  name: "sensor_template"
  pattern: "sensor-*"

# ------------------------------ Global Processors --------------------------
processors:
  - drop_fields:
      fields: ["agent", "ecs", "input", "log", "host"]
What does your input file look like?
{"Id":1,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
{"Id":2,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
{"Id":3,"Lat":19.000000,"Long":20.00000,"key1":"value1"}
It's the 'Id' field here that I want to use for deduplicating documents (i.e. overwriting existing documents with new data).
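If document_id worked the way I expect, the Elasticsearch _id of each document would equal the sensor's Id, so re-ingesting a changed file would overwrite the existing documents. This is the kind of check I would use to verify it (a sketch with the Python client; the index name comes from the output config above):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# After re-ingesting a changed location file, the same _id should return
# the updated coordinates rather than a duplicate document appearing.
doc = es.get(index="sensor-location", id="1")
print(doc["_source"])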
Update 10/05/22:
I have also tried working with:
json.document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  json.document_id: "Id"
ndjson.document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  ndjson.document_id: "Id"
Straight up document_id: "Id"
filebeat.inputs:
- type: filestream
  id: "loc"
  enabled: true
  paths:
    - D:\EFK\Data\Location\*.json
  document_id: "Id"
Trying to overwrite _id using copy_fields
processors:
  - copy_fields:
      fields:
        - from: "Id"
          to: "#metadata_id"
      fail_on_error: false
      ignore_missing: true
The Elasticsearch config has nothing special other than security being disabled, and it's all running on localhost.
Version used for Elasticsearch, Kibana and Filebeat: 8.1.3
Please do comment if you need more info :)
References:
Deduplication in Filebeat: https://www.elastic.co/guide/en/beats/filebeat/8.2/filebeat-deduplication.html#_how_can_i_avoid_duplicates
Filebeat ndjson input: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html#_ndjson
Copy_fields in Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/copy-fields.html#copy-fields

GNU Radio ZMQ Blocks REP - REQ

I am trying to connect GNU Radio to a python script using the GR ZMQ REP / REQ blocks. GR is running on a Raspberry Pi 4 on router address 192.168.1.25. The python script is on a separate computer, from which I can successfully ping 192.168.1.25. I am setting up the REQ-REP pairs on separate ports, 55555 and 55556.
Flow graph: (screenshot not included)
Python script:
import pmt
import zmq

# create a REQ socket
req_address = 'tcp://192.168.1.25:55555'
req_context = zmq.Context()
req_sock = req_context.socket(zmq.REQ)
rc = req_sock.connect(req_address)

# create a REP socket
rep_address = 'tcp://192.168.1.25:55556'
rep_context = zmq.Context()
rep_sock = rep_context.socket(zmq.REP)
rc = rep_sock.connect(rep_address)

while True:
    data = req_sock.recv()
    print(data)
    rep_sock.send(b'1')
Running this code leads to the following error:
ZMQError: Operation cannot be accomplished in current state
The error is flagged at this line:
data = req_sock.recv()
Can you comment on the cause of the error? I know there is a strict REQ-REP, REQ-REP.. relationship, but I cannot find my error.
Your current code has two problems:
You call req_sock.recv(), but then you call rep_sock.send(): that's not how a REQ/REP pair works. You only need to create one socket (the REQ socket); it connects to a remote REP socket.
When you create a REQ socket, you need to send a REQuest before you receive a REPly.
Additionally, you should only create a single ZMQ context, even if you have multiple sockets.
A functional version of your code might look like this:
import zmq

# create a REQ socket
ctx = zmq.Context()
req_sock = ctx.socket(zmq.REQ)

# connect to a remote REP sink
rep_address = 'tcp://192.168.1.25:55555'
rc = req_sock.connect(rep_address)

while True:
    req_sock.send(b'1')
    data = req_sock.recv()
    print(data)
I tested the above code against the following GNU Radio config:
options:
  parameters:
    author: ''
    catch_exceptions: 'True'
    category: '[GRC Hier Blocks]'
    cmake_opt: ''
    comment: ''
    copyright: ''
    description: ''
    gen_cmake: 'On'
    gen_linking: dynamic
    generate_options: qt_gui
    hier_block_src_path: '.:'
    id: example
    max_nouts: '0'
    output_language: python
    placement: (0,0)
    qt_qss_theme: ''
    realtime_scheduling: ''
    run: 'True'
    run_command: '{python} -u {filename}'
    run_options: prompt
    sizing_mode: fixed
    thread_safe_setters: ''
    title: Example
    window_size: (1000,1000)
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [8, 8]
    rotation: 0
    state: enabled
blocks:
- name: samp_rate
  id: variable
  parameters:
    comment: ''
    value: '32000'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 12]
    rotation: 0
    state: enabled
- name: analog_sig_source_x_0
  id: analog_sig_source_x
  parameters:
    affinity: ''
    alias: ''
    amp: '1'
    comment: ''
    freq: '1000'
    maxoutbuf: '0'
    minoutbuf: '0'
    offset: '0'
    phase: '0'
    samp_rate: samp_rate
    type: complex
    waveform: analog.GR_COS_WAVE
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 292.0]
    rotation: 0
    state: true
- name: blocks_throttle_0
  id: blocks_throttle
  parameters:
    affinity: ''
    alias: ''
    comment: ''
    ignoretag: 'True'
    maxoutbuf: '0'
    minoutbuf: '0'
    samples_per_second: samp_rate
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [344, 140.0]
    rotation: 0
    state: true
- name: zeromq_rep_sink_0
  id: zeromq_rep_sink
  parameters:
    address: tcp://0.0.0.0:55555
    affinity: ''
    alias: ''
    comment: ''
    hwm: '-1'
    pass_tags: 'False'
    timeout: '100'
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [504, 216.0]
    rotation: 0
    state: true
connections:
- [analog_sig_source_x_0, '0', blocks_throttle_0, '0']
- [blocks_throttle_0, '0', zeromq_rep_sink_0, '0']
metadata:
  file_format: 1

What is indexed array in YAML?

In my Spring Boot application's YAML config I have
additional-properties[auto.register.schemas]: false
additional-properties[use.latest.version]: true
and it works! I haven't found this syntax in the YAML specification. What does it mean? How can it be re-written using standard YAML? Is this the same as
additional-properties:
  - auto.register.schemas: false
  - use.latest.version: true
?
AFAIK:
Every element (separated by a dot) has to be on its own line and indented accordingly.
foo:
  bar:
    name: value
    name2: value2
  fez: value
So your example would be:
additional-properties:
  auto:
    register:
      schemas: false
After experimenting and finding this answer, I conclude that (at least in a Spring application.yaml):
camel.component.kafka:
  additional-properties[auto.register.schemas]: false
  additional-properties[use.latest.version]: true
is equivalent to
camel.component.kafka.additional-properties:
  "[auto.register.schemas]": false
  "[use.latest.version]": true
and this is equivalent to
camel:
  component:
    kafka:
      additional-properties:
        "[auto.register.schemas]": false
        "[use.latest.version]": true

How to generate configuration for combination of multiple environments and mutations

I'm trying to use gomplate as a configuration generator. The problem I'm facing is that there are multiple mutations and environments in which the application needs to be configured differently. I'd like a user-friendly, readable format with as few repetitions as possible in the template and the source data.
The motivation is to have generated source data app_config that can then be used in a gomplate template like this:
feature_a={{ index (datasource "app_config").features.feature_a .Env.APP_MUTATION .Env.ENV_NAME | required }}
feature_b={{ index (datasource "app_config").features.feature_b .Env.APP_MUTATION .Env.ENV_NAME | required }}
Basically I'd like to have this source data
features:
  feature_a:
    ~: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
converted into this result (used as app_config gomplate datasource)
features:
  feature_a:
    mut_a:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
  feature_b:
    mut_a:
      dev: true
      test: true
      load: false
      staging: false
      prod: false
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
given that datasource platform is defined as
mutations:
  - mut_a
  - mut_b
environments:
  - dev
  - test
  - load
  - staging
  - prod
I chose to use the ~ to state that every environment or mutation that is not defined will get the value behind ~.
This should work under the assumption that the lowest level is the environment and the level above it is the mutation. If environments are not defined, then the mutation level is the lowest and its value applies to all environments. However, I know this brings extra complexity, so I'm willing to use a simplified variant where mutations are always defined:
features:
  feature_a:
    mut_a: true
    mut_b: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
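To make the intended expansion concrete, here is a rough Python sketch (plain PyYAML, not gomplate) of the transformation I expect, applied to the simplified variant above; the ~ key parses as a null key and acts as the default value:

import yaml  # pip install pyyaml

platform = yaml.safe_load("""
mutations: [mut_a, mut_b]
environments: [dev, test, load, staging, prod]
""")

source = yaml.safe_load("""
features:
  feature_a:
    mut_a: true
    mut_b: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
""")

def expand(feature):
    # Expand one feature into {mutation: {environment: value}}.
    expanded = {}
    for mut in platform["mutations"]:
        value = feature.get(mut, feature.get(None))
        if isinstance(value, dict):
            default = value.get(None)  # the ~ entry, if any
            expanded[mut] = {env: value.get(env, default)
                             for env in platform["environments"]}
        else:
            expanded[mut] = {env: value for env in platform["environments"]}
    return expanded

app_config = {"features": {name: expand(spec)
                           for name, spec in source["features"].items()}}
print(yaml.safe_dump(app_config, default_flow_style=False))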
However, since I'm fairly new to gomplate, I'm not sure whether it is the right tool for the job.
I welcome any feedback.
After further investigation I decided that this issue is better solved with a separate tool.

FSCrawler Error while crawling E:\TestFilesToBeIndexed\subfolder: java.net.ConnectException: Connection timed out: connect

Error while crawling path\to\file_folder: java.net.ConnectException: Connection timed out: connect
I am trying to use FSCrawler to ingest files from a remote server into an existing Elasticsearch index (which is on my local machine), but I am getting the above exception.
Below is the _settings.yml file of FSCrawler:
---
name: "index_in_es_onefsc"
server:
  hostname: "machinename.abc.com"
  port: 22
  username: "username"
  password: "password#20"
  protocol: "ssh"
fs:
  url: "E:\\TestFilesToBeIndexed"
  update_rate: "15m"
  excludes:
  - "*/~*"
  json_support: false
  filename_as_id: false
  add_filesize: true
  remove_deleted: true
  add_as_inner_object: false
  store_source: false
  index_content: true
  attributes_support: false
  raw_metadata: false
  xml_support: false
  index_folders: true
  lang_detect: false
  continue_on_error: false
  ocr:
    language: "eng"
    enabled: true
    pdf_strategy: "ocr_and_text"
  follow_symlinks: false
elasticsearch:
  nodes:
  - url: "http://127.0.0.1:9200"
  bulk_size: 100
  flush_interval: "5s"
  byte_size: "10mb"
The documentation says that on Windows, when doing SSH from and to a Windows machine, you must use a different form for the fs.url. I think that on Windows you need to use:
name: "index_in_es_onefsc"
fs:
url: "/E:/TestFilesToBeIndexed"
server:
hostname: "machinename.abc.com"
port: 22
username: "username"
password: "password#20"
protocol: "ssh"
Note that there is a known issue when running FSCrawler from a Windows machine. This has been fixed, but if you are using an older SNAPSHOT version than the one published on June 26th, you will most likely need to upgrade.
