I am trying to configure Logstash and Filebeat, running in Kubernetes, to connect and push logs from the cluster to my deployment on Elastic Cloud.
I have configured the Logstash ConfigMap (containing logstash.conf) with the host, username, and password; please find the config below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: ns-elastic
data:
  logstash.conf: |-
    input {
      beats {
        port => "9600"
      }
    }
    filter {
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }
      # Container logs arrive with a field named index_prefix.
      # Since the message is in JSON format, we can decode it via the json filter plugin.
      if [index_prefix] == "store-logs" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      if [index_prefix] == "ingress-" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      # Do not expose the index_prefix field to Kibana.
      mutate {
        # @metadata is not exposed outside of Logstash by default.
        add_field => { "[@metadata][index_prefix]" => "%{index_prefix}-%{+YYYY.MM.dd}" }
        # Since we added index_prefix to @metadata, we no longer need the index_prefix field.
        remove_field => ["index_prefix"]
      }
    }
    output {
      # You can uncomment this line to inspect the events generated by Logstash.
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => "https://******.es.*****.azure.elastic-cloud.com:9243"
        user => "username"
        password => "*****************"
        document_id => "%{[@metadata][fingerprint]}"
        # The events will be stored in Elasticsearch under the previously defined index_prefix value.
        index => "%{[@metadata][index_prefix]}"
      }
    }
However, Logstash keeps restarting with the error below:
[2022-06-19T17:32:31,943][INFO ][org.logstash.beats.Server][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] Starting server on port: 9600
[2022-06-19T17:32:38,154][ERROR][logstash.javapipeline ][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>9600, id=>"3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_4b2c91f6-9a6f-4e5e-9a96-5b42e20cd0d9", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.3, cipher_suites=>["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>1>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:459)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:448)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
Can anyone please help me understand what I am doing incorrectly here? My end goal is to push logs from my Kubernetes cluster to my Elasticsearch Service deployment on Elastic Cloud. Please assist, as I have been unable to find enough resources on this.
The error in your logs says:
Error: Address already in use
Exception: Java::JavaNet::BindException
This means another process is already bound to TCP port 9600. In a container, a likely culprit is Logstash itself: its monitoring API listens on port 9600 by default, so a Beats input configured on the same port collides with it. You can also use netstat -plant to inspect the services listening on your host; it could be another Logstash instance that was not shut down properly.
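A minimal sketch of the fix under that assumption: move the Beats input to 5044, the conventional Beats port, which does not clash with Logstash's own API server:

input {
  beats {
    # 5044 is the conventional Beats port and avoids Logstash's API server on 9600
    port => 5044
  }
}

And point Filebeat at the new port in filebeat.yml (the service name logstash-service is hypothetical, standing in for however Filebeat reaches your Logstash pods):

output.logstash:
  # hypothetical Kubernetes service name for the Logstash pods
  hosts: ["logstash-service:5044"]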
I am new to Elasticsearch. I tried to configure Elasticsearch, Kibana, and Logstash with the MQTT plugin. I am supposed to send logs to Elasticsearch through the Logstash MQTT plugin. I installed them locally on a Mac, but when starting Logstash, it throws the following error:
[2021-11-12T17:26:37,976][ERROR][logstash.outputs.elasticsearch][logstash_pipeline] Unable to get license information {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_license'"}
My Logstash configuration file looks like:
input {
  mqtt {
    host => "localhost"
    port => 1883
    topic => "test"
    qos => 0
    certificate_path => "/Users/john/logstash-7.10.2/logstash/m2mqtt_ca.crt"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Can anybody tell me what the issue is?
I am trying to get data from a Kafka topic into the ELK stack with Logstash, but can't get the data moving.
I edited logstash.conf to the following:
input {
  tcp {
    port => 5000
  }
  kafka {
    bootstrap_servers => "broker:29092"
    topics => ["PLACES_ROWKEY"]
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    index => "from_logstash"
  }
}
I'm running this setup in Docker, if it matters (broker is the hostname of the Kafka broker container). I restarted Logstash but can't see any new indices in Elasticsearch.
I'm trying to read logs from a RabbitMQ queue with Logstash and then pass them to Elasticsearch, but with no success. Here is my Logstash config:
input {
  rabbitmq {
    host => "localhost"
    port => 15672
    heartbeat => 30
    durable => true
    exchange => "logging_queue"
    exchange_type => "logging_queue"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout {}
}
But no index is created, so of course I can't see any logs in Kibana.
There are some messages in the queue.
I think the correct (default) port is 5672; 15672 is the port of the RabbitMQ web management console.
input {
  rabbitmq {
    host => "localhost"
    port => 5672  # <-- change this
    heartbeat => 30
    durable => true
    exchange => "logging_queue"
    exchange_type => "logging_queue"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout {}
}
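As a side note, and an assumption about your setup rather than something visible in the question: the rabbitmq input's exchange_type option expects one of RabbitMQ's exchange types (direct, fanout, topic, headers), not the exchange name, and you usually also name the queue to consume from. A sketch, assuming a fanout exchange and a hypothetical queue name:

input {
  rabbitmq {
    host => "localhost"
    port => 5672
    queue => "logstash"           # hypothetical queue to consume from
    exchange => "logging_queue"
    exchange_type => "fanout"     # one of: direct, fanout, topic, headers
    durable => true
    heartbeat => 30
  }
}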
I'm using Logstash with persistent queuing enabled.
I've set up Logstash to ingest rows from MySQL via the JDBC input plugin on startup. Currently this ingests 1846 rows.
I also have an http input.
When I take down ES and restart Logstash, I get errors, as expected:

logstash_1 WARN logstash.outputs.amazones - Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Faraday::ConnectionFailed", :backtrace=>nil}
logstash_1 ERROR logstash.outputs.amazones - Attempted to send a bulk request to Elasticsearch configured at …

I'd expect that in this situation, hitting the Logstash http input would still result in an ack. Actually, the HTTP POST does not return, and the injection is not seen in the Logstash logs.
My logstash.yml looks like:
queue.type: persisted
queue.checkpoint.writes: 1
queue.max_bytes: 8gb
queue.page_capacity: 512mb
And my logstash.conf:
input {
  jdbc {
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    jdbc_driver_library => "/home/logstash/jdbc_driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "
      SELECT blah blah blah
    "
  }
  http {
    host => "0.0.0.0"
    port => 31311
  }
}
output {
  stdout { codec => json_lines }
  amazon_es {
    hosts => ["${AWS_ES_HOST}"]
    region => "${AWS_REGION}"
    aws_access_key_id => '${AWS_ACCESS_KEY_ID}'
    aws_secret_access_key => '${AWS_SECRET_ACCESS_KEY}'
    index => "${INDEX_NAME}"
    document_type => "data"
    document_id => "%{documentid}"
  }
}
Is it possible for the http input to still ack events? I'm pretty sure the queue cannot be full, as each event payload is about 850 characters.
Thanks in advance.