I have installed ELK stack version 7.0.0 on a CentOS 7 VM and ran into an issue when starting the Logstash service:
[ERROR] 2019-05-13 08:21:37.359 [Converge PipelineAction::Create] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"MultiJson::ParseError", :message=>"JrJackson::ParseError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/multi_json-1.13.1/lib/multi_json/adapter.rb:20:in `load'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/multi_json-1.13.1/lib/multi_json.rb:122:in `load'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/avro-1.8.2/lib/avro/schema.rb:36:in `parse'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-avro-3.2.3-java/lib/logstash/codecs/avro.rb:69:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/base.rb:18:in `initialize'", "org/logstash/plugins/PluginFactoryExt.java:255:in `plugin'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
Please help me figure out how to resolve this issue. A similar installation on another CentOS VM works fine.
I think the JSON in your configuration is failing to parse: the backtrace shows the Avro codec (logstash-codec-avro) raising a JrJackson::ParseError while parsing its schema. To confirm the rest of the pipeline works, put in a simple configuration like the one below:
input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://ip:9200"]
  }
}
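If you do need the avro codec, check that the schema it loads is valid JSON before re-enabling it; the backtrace shows avro/schema.rb failing inside a JSON parse. A minimal sketch, where the schema path is a hypothetical placeholder (substitute the schema location from your own avro codec config):

# Prints the parsed schema, or fails with a parse error if the file is not valid JSON
python -m json.tool /etc/logstash/schemas/events.avsc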
I am trying to configure Logstash and Filebeat running in Kubernetes to connect and push logs from the Kubernetes cluster to my deployment on Elastic Cloud.
I have configured the logstash.yaml file with the host, username, and password; please find the config below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: ns-elastic
data:
  logstash.conf: |-
    input {
      beats {
        port => "9600"
      }
    }
    filter {
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }
      # Container logs are received with a field named index_prefix.
      # Since the message is in JSON format, we can decode it via the json filter plugin.
      if [index_prefix] == "store-logs" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      if [index_prefix] == "ingress-" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      # Do not expose the index_prefix field to Kibana.
      mutate {
        # @metadata is not exposed outside of Logstash by default.
        add_field => { "[@metadata][index_prefix]" => "%{index_prefix}-%{+YYYY.MM.dd}" }
        # Since we added index_prefix to @metadata, we no longer need the index_prefix field.
        remove_field => ["index_prefix"]
      }
    }
    output {
      # This stdout line lets you inspect the events generated by Logstash.
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => "https://******.es.*****.azure.elastic-cloud.com:9243"
        user => "username"
        password => "*****************"
        document_id => "%{[@metadata][fingerprint]}"
        # Events will be stored in Elasticsearch under the previously defined index_prefix value.
        index => "%{[@metadata][index_prefix]}"
      }
    }
However, Logstash keeps restarting with the error below:
[2022-06-19T17:32:31,943][INFO ][org.logstash.beats.Server][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] Starting server on port: 9600
[2022-06-19T17:32:38,154][ERROR][logstash.javapipeline ][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>9600, id=>"3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_4b2c91f6-9a6f-4e5e-9a96-5b42e20cd0d9", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.3, cipher_suites=>["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>1>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:459)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:448)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
Can anyone please help me understand what I am doing incorrectly here? My end goal is to push logs from my Kubernetes cluster to my Elasticsearch deployment on Elastic Cloud. Please assist, as I have not been able to find enough resources on this.
The error in your logs says:
Error: Address already in use
Exception: Java::JavaNet::BindException
This means another process is already bound to TCP port 9600. Note that Logstash's own monitoring API listens on port 9600 by default, so a beats input on that port will collide with it; the conventional beats port is 5044. It could also be another Logstash instance that was not shut down properly.
You can use netstat -plant to inspect the services listening on your host.
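A quick way to check, assuming netstat or ss is available in the container or on the host:

# Show which process is listening on TCP port 9600
netstat -plant | grep 9600
# or, on systems without netstat
ss -ltnp | grep 9600

If it turns out to be Logstash's own API, move the beats input to another port (for example 5044) and update the Filebeat output to match.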
I am trying to send CSV data to Kafka using Logstash, with my own configuration file named test.conf.
I got this error when running it:
Using JAVA_HOME defined java: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.262.b10-0.el7_8.x86_64
WARNING, using JAVA_HOME while Logstash distribution comes with a bundled JDK
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-05-24 19:12:08.565 [main] runner - Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.262-b10 on 1.8.0_262-b10 +indy +jit [linux-x86_64]"}
[FATAL] 2021-05-24 19:12:08.616 [main] runner - An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:530:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:290:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:201:in `block in validate_all'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:200:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:317:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:273:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:88:in `<main>'"]}
[ERROR] 2021-05-24 19:12:08.623 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This is the command used to run Logstash:
/usr/share/logstash/bin/logstash -f test.conf
Here is the config file:
input {
  file {
    path => "/home/data/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  mutate {
    add_field => {
      "timestamp" => "%{Date} %{Time}"
    }
  }
  date { match => ["timestamp", "dd-MM-YYYY HH:mm:ss"] }
  csv {
    remove_field => ["Date", "Time"]
  }
  grok {
    match => { "message" => [
      "^%{DATE:timestamp},%{NUMBER:ab},%{NUMBER:cd},%{NUMBER:ef},%{NUMBER:gh},%{NUMBER:ij},%{NUMBER:kl},%{NUMBER:mn},%{NUMBER:op},%{NUMBER:qr},%{NUMBER:st},%{NUMBER:uv},%{NUMBER:wx},%{NUMBER:yz}$"
    ] }
  }
}
output {
  stdout { codec => rubydebug }
  if "_grokparsefailure" not in [tags] {
    kafka {
      codec => "json"
      topic_id => "abcd1234"
      bootstrap_servers => "192.16.12.119:9092"
    }
  }
}
Please help me with this.
First of all, make sure your server's IP and port are reachable, e.g. with telnet 192.16.12.119 9092.
After that, double-check the option names in your kafka output section. Note that the output takes bootstrap_servers (not bootstrap_server) and topic_id; group_id is an option of the kafka input plugin, not the output. For example:
kafka {
  topic_id => "your-topic-name"
  bootstrap_servers => "192.16.12.119:9092"
  codec => json
}
If it still doesn't work, check the broker's advertised listeners: Kafka must advertise an address that the Logstash host can reach, e.g. in the broker's server.properties:
advertised.listeners=PLAINTEXT://192.16.12.119:9092
Finally, do not post your server's or system's real IP on the internet; it's not safe ;)
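A quick way to sanity-check the broker side first (the topic name and address come from the question; depending on your Kafka version, the topics tool may need --zookeeper instead of --bootstrap-server):

# Is the broker port reachable from the Logstash host?
telnet 192.16.12.119 9092

# Does the topic exist on the broker?
kafka-topics.sh --bootstrap-server 192.16.12.119:9092 --list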
Your config is not even being loaded; you have a FATAL error when starting Logstash:
[FATAL] 2021-05-24 19:12:08.616 [main] runner - An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data" must be a writable directory. It is not writable.>,
The user you are running Logstash as does not have permission to write to this directory; it needs write access to the path.data directory or Logstash won't start.
Your logstash.yml file is also not being loaded:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
First, give the user running Logstash permission to write into path.data. You can change path.data in the logstash.yml file and then pass the path to that file on the command line.
Assuming you installed Logstash using a package manager like yum or apt, your logstash.yml file will be in the /etc/logstash/ directory.
So you need to run logstash this way:
/usr/share/logstash/bin/logstash -f /path/to/your/config.conf --path.settings /etc/logstash/
In the logstash.yml you need to set path.data to a directory where the user has permissions to write.
path.data: /path/to/writable/directory
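For example, assuming a package install where Logstash runs as the logstash user and the data directory is /var/lib/logstash (adjust both to your setup), a minimal sketch:

# Make the data directory writable by the user that runs Logstash
sudo mkdir -p /var/lib/logstash
sudo chown -R logstash:logstash /var/lib/logstash

# Then, in /etc/logstash/logstash.yml:
# path.data: /var/lib/logstash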
I have installed the ELK stack on Windows and configured Logstash to read an Apache log file. I can't seem to see the output in Elasticsearch. I am very new to the ELK stack.
Environment Setup
Elasticsearch: http://localhost:9200/
Logstash :
Kibana : http://localhost:5601/
All three applications above are running as services.
I have created a file called "logstash.conf" in "C:\Elk\logstash\conf\logstash.conf" to read Apache logs, with the following:
input {
file {
path => "C:\Elk\apache.log"
start_position => "beginning"
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
}
I then restarted my Logstash service and now wish to see whether Elasticsearch is indexing the content of my log. How do I go about doing this?
Try adding the following lines to your Logstash conf and let us know if there are any grok parsing failures, which would mean the pattern used in your filter section is not correct:
output {
stdout { codec => json }
file { path => "C:/POC/output3.txt" }
}
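You can also query Elasticsearch directly to see whether anything was indexed (assuming the default logstash-* index naming used by the elasticsearch output):

# List all indices and their document counts
curl "http://localhost:9200/_cat/indices?v"

# Fetch one sample document from the Logstash indices
curl "http://localhost:9200/logstash-*/_search?pretty&size=1"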
I wrote a .conf file following the example given in the Logstash documentation and tried to run it. Logstash started, but when I typed some input it gave the error mentioned in the title.
I am using Windows 8.1, and the .conf file is saved in logstash-1.5.0/bin.
Here is the .conf file:
input { stdin { } }
output {
elasticsearch { host => localhost }
stdout { codec => rubydebug }
}
Here is the screenshot of the command prompt:
Try this; "logstash" should be the same as the cluster name in your elasticsearch.yml:
output {
elasticsearch {
cluster => "logstash"
}
}
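Before that, verify that Elasticsearch is actually running and reachable, for example:

# Should return a JSON document with the cluster name and version
curl http://localhost:9200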
I found the error. It was because I had not installed Elasticsearch before running Logstash.
Thanks for trying to help me out.
Logstash is not working on my system (Windows 7). I am using Logstash 1.4.0, Kibana 3.0.0, and Elasticsearch 1.3.0.
I created a logstash.conf file in logstash-1.4.0 (logstash-1.4.0/logstash.conf):
input {
file {
path => "C:/apache-tomcat-7.0.62/logs/*access*"
}
}
filter {
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch { host => "localhost:9205" }
}
And I run Logstash:
c:\logstash-1.4.0\bin>logstash agent -f ../logstash.conf
I get the exception below:
log4j, [2015-06-09T15:24:45.342] WARN: org.elasticsearch.transport.netty: [logstash-IT-BHARADWAJ-512441] exception caught on transport layer [[id: 0x0ee1f960]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
etc……..
How do I solve this problem?
You can't connect to the socket. By default, Elasticsearch listens on port 9200 for HTTP and 9300 for the transport protocol (TCP). Try changing it to 9200 first, since that is the default.
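A minimal sketch of a corrected output for Logstash 1.4, assuming you want the HTTP protocol on the default port (host and protocol are separate options in that version's elasticsearch output):

output {
  elasticsearch {
    host => "localhost"
    protocol => "http"   # talks to Elasticsearch over HTTP on port 9200
  }
}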