Unable to parse logstash.yml file within /etc/logstash [ELK Stack]

Good Evening,
I receive the error: unable to parse logstash.yml
My OS: lubuntu 20.04.3
Here is the uncommented line within /etc/logstash/logstash.yml:
path.conf: "/etc/logstash/conf.d"
Within the conf.d directory, I have my logstash.conf file.
I run the command: /usr/share/logstash/bin/logstash -f, --path.settings /etc/logstash
Here is the whole logstash.yml file: https://pastebin.com/gTk1Cuhc
I've tried:
path.conf: /etc/logstash/conf.d
and
path.conf: '/etc/logstash/conf.d'
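One detail worth checking (a sketch, not a confirmed diagnosis): the documented name of this setting is path.config, not path.conf, and an unrecognized key can make Logstash reject the settings file. A minimal logstash.yml along the lines of the stock package defaults would be:

# minimal sketch, assuming the stock package layout; keep your other settings as they are
path.config: "/etc/logstash/conf.d"
path.data: /var/lib/logstash
path.logs: /var/log/logstash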

Related

Homebrew Logstash failed to open a file under ~/Documents

I installed Logstash via Homebrew and configured it to tail a log file under my ~/Documents folder. However, the Logstash log indicates it failed to open the file with the following error:
[2020-09-14T13:58:27,892][WARN ][filewatch.tailmode.handlers.createinitial][others][7b0d387b8f4792cb946034cfce3f23950334ccb21b9d8b1792d718ecbf4d5c3a] failed to open /Users/xxx/Documents/project/logs/all: #<IOError: Operation not permitted>, ["org/jruby/RubyIO.java:1234:in `sysopen'", "org/jruby/RubyFile.java:365:in `initialize'", "org/jruby/RubyIO.java:1156:in `open'"]
I tried adding the following files to the Security & Privacy -> Full Disk Access list, but it still failed:
/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/bin/java
/usr/local/Cellar/logstash/7.9.1/bin/logstash
/usr/local/Cellar/logstash/7.9.1/bin/logstash-plugin
/usr/local/Cellar/logstash/7.9.1/libexec/bin/logstash
I confirmed that Logstash can access other files (e.g. /var/log/system.log).
Here is my Logstash input configuration:
input {
  file {
    path => [
      "/Users/xxx/Documents/project/logs/all"
    ]
    mode => "tail"
  }
}
Does anyone know what may be the cause of this failure?
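A diagnostic sketch that may help (assumption: macOS grants Full Disk Access per executable, and the grant is landing on the wrong binary): Homebrew's Logstash may be running under a java executable other than the ones whitelisted above, so it is worth checking which binary the live process actually is:

# macOS: show the full executable path of the running Logstash/Java process
ps -axo pid,comm | grep -i java

Whichever path this prints is the executable that needs the Full Disk Access grant.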

rsyslog 8.34.0: could not load module '/usr/lib/rsyslog/omuxsock.so'

My project requires forwarding logs to a socket using rsyslog, and rsyslog provides the omuxsock output module for this. When I try to use it following the standard example, I see the error below.
rsyslogd: could not load module '/usr/lib/rsyslog/omuxsock.so', dlopen: Error loading shared library /usr/lib/rsyslog/omuxsock.so: No such file or directory [v8.34.0 try http://www.rsyslog.com/e/2066 ]
Could someone please help me solve this issue?
System Info
Alpine container: v3.8
rsyslog: 8.34.0-r0
Full log:
/ # rsyslogd -N6 | head -10
rsyslogd: version 8.34.0, config validation run (level 6), master config /etc/rsyslog.conf
rsyslogd: could not load module '/usr/lib/rsyslog/omuxsock.so', dlopen: Error loading shared library /usr/lib/rsyslog/omuxsock.so: No such file or directory [v8.34.0 try http://www.rsyslog.com/e/2066 ]
rsyslogd: invalid or yet-unknown config file command 'OMUxSockSocket' - have you forgotten to load a module? [v8.34.0 try http://www.rsyslog.com/e/3003 ]
rsyslogd: error during parsing file /etc/rsyslog.conf, on or before line 30: errors occured in file '/etc/rsyslog.conf' around line 30 [v8.34.0 try http://www.rsyslog.com/e/2207 ]
rsyslogd: error during parsing file /etc/rsyslog.d/rsyslog_stats.conf, on or before line 15: invalid character '\' in expression - is there an invalid escape sequence somewhere? [v8.34.0 try http://www.rsyslog.com/e/2207 ]
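A first thing I would check (the package name below is an assumption, not a confirmed fix): Alpine splits many rsyslog modules into separate apk subpackages, so omuxsock.so may simply not ship with the base rsyslog package. Searching the repositories shows which module packages exist:

/ # apk update
/ # apk search rsyslog
# hypothetical subpackage name; install whichever package the search shows providing omuxsock.so
/ # apk add rsyslog-omuxsock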

rsyslog to send data to the same server showing error on modules

My app is producing logs under
/var/log/myapp/app.log
I need to send all the logs from app.log to my syslog file /var/log/syslog.
I created a file under /etc/rsyslog.d with the following content:
$WorkDirectory /var/spool/rsyslog
$template RFC3164fmt,"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"
# Log shipment rsyslog target servers
$ActionQueueFileName appfile
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount 250
*.* #10.x.x.1;RFC3164fmt
# Log files
$InputFileName /var/log/myapp/app.log
$InputFileTag app:
$InputFileStateFile state-app
$InputFileFacility local7
$InputFilePollInterval 1
$InputFilePersistStateInterval 1
$InputRunFileMonitor
10.x.x.1 is the same node where rsyslog is installed and my app is running.
Once I restart rsyslog, I get the following errors in the syslog file:
invalid or yet-unknown config file command 'InputFileName' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputFileTag' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputFileStateFile' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputFileFacility' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputFilePollInterval' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputFilePersistStateInterval' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
invalid or yet-unknown config file command 'InputRunFileMonitor' - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
I also checked a couple of SO links and changed the user in my /etc/rsyslog.conf, but that did not help:
#$PrivDropToUser syslog
$PrivDropToUser appuser
#$PrivDropToGroup syslog
$PrivDropToGroup appuser
It turned out I had missed loading the module. Adding the following line at the beginning of the file resolved the issue:
$ModLoad imfile
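For completeness, a sketch of the corrected /etc/rsyslog.d file, using the same directives as in the question with the input module loaded before any $InputFile* directives (the forward line uses @ for UDP on the assumption that the "#" in the question is a paste artifact):

# load the file-input module before any $InputFile* directives
$ModLoad imfile

$WorkDirectory /var/spool/rsyslog
$template RFC3164fmt,"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"

# Log files
$InputFileName /var/log/myapp/app.log
$InputFileTag app:
$InputFileStateFile state-app
$InputFileFacility local7
$InputFilePollInterval 1
$InputFilePersistStateInterval 1
$InputRunFileMonitor

# Log shipment to the rsyslog target server
$ActionQueueFileName appfile
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount 250
*.* @10.x.x.1;RFC3164fmt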

HiBench Benchmark suite error: INPUT_HDFS: unbound variable

I have installed Hadoop 2.7.1 on an Ubuntu virtual machine. I want to execute the Kmeans algorithm with HiBench, but when I execute the script prepare.sh, I get the following error:
patching args=
Parsing conf: /home/hduser/HiBench/conf/00-default-properties.conf
Parsing conf: /home/hduser/HiBench/conf/01-default-streamingbench.conf
Parsing conf: /home/hduser/HiBench/conf/10-data-scale-profile.conf
Parsing conf: /home/hduser/HiBench/conf/20-samza-common.conf
Parsing conf: /home/hduser/HiBench/conf/30-samza-workloads.conf
Parsing conf: /home/hduser/HiBench/workloads/kmeans/conf/00-kmeans-default.conf
Parsing conf: /home/hduser/HiBench/workloads/kmeans/conf/10-kmeans-userdefine.conf
Probing spark verison, may last long at first time...
probe sleep jar: /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar
Traceback (most recent call last):
  File "/home/hduser/HiBench/bin/functions/load-config.py", line 556, in <module>
    load_config(conf_root, workload_root, workload_folder, patching_config)
  File "/home/hduser/HiBench/bin/functions/load-config.py", line 165, in load_config
    check_config()
  File "/home/hduser/HiBench/bin/functions/load-config.py", line 172, in check_config
    assert HibenchConf.get(prop_name, None) is not None, "Mandatory configure missing: %s" % prop_name
AssertionError: Mandatory configure missing: hibench.hdfs.master
/home/hduser/HiBench/bin/functions/workload-functions.sh: line 39: .: filename argument required
.: usage: . filename [arguments]
start HadoopPrepareKmeans bench ./prepare.sh: line 25: INPUT_HDFS: unbound variable
I have set the configuration in the file 99-user_defined_properties.conf.template. The settings are:
# Hadoop home
hibench.hadoop.home /usr/local/hadoop/bin
# Spark home
hibench.spark.home /PATH/TO/YOUR/SPARK/ROOT
# HDFS master, set according to hdfs-site.xml
hibench.hdfs.master hdfs://localhost:54310
# Spark master
# standalone mode: `spark://xxx:7077`
# YARN mode: `yarn-client`
# unset: fallback to `local[1]`
hibench.spark.master yarn-client
How can I solve this?
AssertionError: Mandatory configure missing: hibench.hdfs.master
You need to fix this configuration error.
Did you name your file properly? 99-user_defined_properties.conf.template is the template; the actual configuration file is supposed to be named 99-user_defined_properties.conf, as shown below.
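Assuming the file lives in HiBench/conf like the other files shown in the parsing log, copying the template to the expected name is enough for HiBench to pick it up:

cp /home/hduser/HiBench/conf/99-user_defined_properties.conf.template /home/hduser/HiBench/conf/99-user_defined_properties.conf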
hibench.hdfs.master sets the address of the HDFS master node. The default value is hdfs://127.0.0.1:8020, but if your cluster uses a different address you have to update it in hadoop.conf. Usually you can find the right address in Hadoop's configuration file core-site.xml.
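For example, to check the address Hadoop itself is configured with (the install path is an assumption based on the hibench.hadoop.home value in the question; the property is fs.defaultFS on Hadoop 2.x, fs.default.name on older releases):

grep -E -A 1 'fs.defaultFS|fs.default.name' /usr/local/hadoop/etc/hadoop/core-site.xml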
I faced the same error and could solve it by manually setting hibench.master.hostname and hibench.slaves.hostname in the hibench.conf file. Also ensure that the HDFS port in the hadoop.conf file matches the one specified in Hadoop's configuration files.

Kibana 4.4.2 Environment variable in config file

Edit: For anyone wanting updates, here is a link to the issue I created to hopefully get this added:
https://github.com/elastic/kibana/issues/6688
When I set up Elasticsearch, I can have entries in the config file that reference environment variables (this is on RHEL - Linux):
var.name: ${ENV_VAR}/something
This works for Elasticsearch.
I want to do the same for Kibana:
logging.dest: ${ELK_HOME}/log/kibana.log
Problem is, in this case I get an error:
events.js:141
      throw er; // Unhandled 'error' event
      ^
Error: ENOENT: no such file or directory, open '${ELK_HOME}/log/kibana.log'
    at Error (native)
Finished
Any ideas how I can make the Kibana kibana.yml config file use environment variables?
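Until something like that issue lands, a workaround sketch (kibana.yml.template is a hypothetical file you would maintain, containing the ${ELK_HOME} placeholders) is to expand the variables yourself before starting Kibana:

# substitute only ${ELK_HOME}, leaving any other $ sequences untouched
envsubst '${ELK_HOME}' < kibana.yml.template > kibana.yml
./bin/kibana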
