How to push logs from Kubernetes to an Elastic Cloud deployment?
I am trying to configure Logstash and Filebeat, running in Kubernetes, to connect and push logs from the Kubernetes cluster to my deployment on Elastic Cloud.
I have configured the logstash.yaml file with the host, username, and password; please find the config below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: ns-elastic
data:
  logstash.conf: |-
    input {
      beats {
        port => "9600"
      }
    }

    filter {
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }

      # Container logs are received with a field named index_prefix.
      # Since the message is in JSON format, we can decode it via the json filter plugin.
      if [index_prefix] == "store-logs" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }

      if [index_prefix] == "ingress-" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }

      # Do not expose the index_prefix field to Kibana.
      mutate {
        # @metadata is not exposed outside of Logstash by default.
        add_field => { "[@metadata][index_prefix]" => "%{index_prefix}-%{+YYYY.MM.dd}" }
        # Since we added index_prefix to @metadata, we no longer need the index_prefix field.
        remove_field => ["index_prefix"]
      }
    }

    output {
      # You can uncomment this line to inspect the events generated by Logstash.
      # stdout { codec => rubydebug }
      elasticsearch {
        hosts => "https://******.es.*****.azure.elastic-cloud.com:9243"
        user => "username"
        password => "*****************"
        document_id => "%{[@metadata][fingerprint]}"
        # The events will be stored in Elasticsearch under the index_prefix value defined above.
        index => "%{[@metadata][index_prefix]}"
      }
    }
However, Logstash restarts with the below error:
[2022-06-19T17:32:31,943][INFO ][org.logstash.beats.Server][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] Starting server on port: 9600
[2022-06-19T17:32:38,154][ERROR][logstash.javapipeline ][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>9600, id=>"3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_4b2c91f6-9a6f-4e5e-9a96-5b42e20cd0d9", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.3, cipher_suites=>["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>1>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:459)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:448)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
Can anyone please help me understand what I am doing incorrectly here? My end goal is to push logs from my Kubernetes cluster to my Elasticsearch deployment on Elastic Cloud. Please assist, as I am unable to find enough resources on this.
The error we see in your logs says:

Error: Address already in use
Exception: Java::JavaNet::BindException

This means another process is already bound to TCP port 9600. You can use netstat -plant to inspect the services listening on your host; it could be another Logstash instance that was not shut down properly.
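In this particular setup, though, the process already bound to 9600 is almost certainly Logstash itself: its monitoring HTTP API listens on port 9600 by default (the http.port setting, range 9600-9700), so a beats input configured on the same port inside the same container can never bind. The usual fix is to move the beats input to a different port; 5044 is the Beats convention. Below is a minimal sketch of the change, where logstash-service is a placeholder for whatever Kubernetes Service name you actually use:

# Logstash pipeline: use the conventional Beats port instead of 9600,
# which collides with Logstash's own HTTP API.
input {
  beats {
    port => 5044
  }
}

# Filebeat side (filebeat.yml): point the output at the Logstash Service
# on the new port. The Service name and namespace below are hypothetical.
output.logstash:
  hosts: ["logstash-service.ns-elastic.svc.cluster.local:5044"]

If you change the port, remember to update the containerPort and the Kubernetes Service definition to match.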
Related
Got response code '400' contacting Elasticsearch at URL in logstash
I am new to Elasticsearch. I tried to configure Elasticsearch, Kibana, and Logstash with the MQTT plugin. I am supposed to send logs to Elasticsearch through the Logstash MQTT plugin. I installed them locally on a Mac, but when starting Logstash, it throws the following error:

[2021-11-12T17:26:37,976][ERROR][logstash.outputs.elasticsearch][logstash_pipeline] Unable to get license information {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_license'"}

My Logstash configuration file looks like:

input {
  mqtt {
    host => "localhost"
    port => 1883
    topic => "test"
    qos => 0
    certificate_path => "/Users/john/logstash-7.10.2/logstash/m2mqtt_ca.crt"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Can anybody tell what the issue is?
Logstash creating pipelines from Kafka not working
I am trying to get data from a Kafka topic into the ELK stack with Logstash, but can't get the data moving. I edited logstash.conf to the following:

input {
  tcp {
    port => 5000
  }
  kafka {
    bootstrap_servers => "broker:29092"
    topics => ["PLACES_ROWKEY"]
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    index => "from_logstash"
  }
}

I'm running this setup in Docker, if it matters (broker is the hostname of the Kafka broker container). I restart Logstash but can't see any new indices in Elasticsearch.
Logstash Elastic Cloud 401 Unauthorized error
References: the official Logstash Elastic Cloud module and the official doc for getting started.
My logstash.yml looks like:

cloud.id: "Test:testkey"
cloud.auth: "elastic:password"

(indented with two spaces, no trailing space, values within double quotes). This is all I have in logstash.yml, nothing else, and I am getting:

[2018-08-29T12:33:52,112][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://myserverurl:12345/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://myserverurl:12345/'"}

And my my_config_file_name.conf looks like:

input {
  jdbc {
    # ...jdbc config here... (this works, as I see data in the Windows console)
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => ["myserverurl:12345"]
    index => "my_index"
    # document_id => "%{brand}"
  }
}

What I am doing is running bin/logstash in the Windows cmd. It loads the data from the database I configured in the input of the conf file and then shows me the error. I want to index my data from MySQL to Elasticsearch on the cloud. I took a 14-day trial and created a test index for learning purposes, as I later have to deploy it. My pipeline looks like:

- pipeline.id: my_id
  path.config: "./config/conf_file_name.conf"
  pipeline.workers: 1

If the logs won't include sensitive data, I can provide them as well. Basically, I want to sync (on a scheduled check) my MySQL data with Elasticsearch on the cloud, i.e. AWS.
The output should be:

elasticsearch {
  hosts => ["https://yourhost:yourport/"]
  user => "elastic"
  password => "password"
  # protocol => https
  # port => "yourport"
  index => "test_index"
  # document_id => "%{table_id}"
}

(# represents a comment, as stated in the "Configuring Logstash with Elastic Cloud" docs.) The document provided while deploying the app does not provide a config for jdbc; jdbc also needs the user and password, even if they are defined in the settings file, i.e. logstash.yml.
Also, if you created your API key in the web UI, you will not be able to get the values needed to configure Logstash. You must use the Dev Tools console found at /app/dev_tools#/console with something like this:

POST /_security/api_key
{
  "name": "logstash"
}

The output is something like:

{
  "id": "<id value>",
  "name": "logstash",
  "api_key": "<api key>",
  "encoded": "<encoded api key>"
}

And in your Logstash pipeline config you use the values like this:

output {
  elasticsearch {
    cloud_id => "<cloud id>"
    api_key => "<id value>:<api key>"
    data_stream => true
    ssl => true
  }
  stdout { codec => rubydebug }
}

Note the combined api_key value separated by ":". Also, you can find the cloud ID under your "Deployments" menu option.
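If you prefer the command line over the Dev Tools console, the same API can be called with curl; the host, port, and credentials below are placeholders for your own deployment:

# Create an API key for Logstash (placeholder URL and credentials)
curl -u elastic:your_password \
  -X POST "https://your-deployment-url:9243/_security/api_key" \
  -H 'Content-Type: application/json' \
  -d '{"name": "logstash"}'

The response contains the same id and api_key fields shown above.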
I had the same issue in my dev environment. After scouring Google for hours, I understood that by default, when you install Logstash, X-Pack is installed along with it. The doc https://www.elastic.co/guide/en/logstash/current/setup-xpack.html states:

"X-Pack is an Elastic Stack extension that provides security, alerting, monitoring, machine learning, pipeline management, and many other capabilities"

As I don't need X-Pack in my dev environment while streaming to Elasticsearch, I had to disable it by setting ilm_enabled to false in the output of my indexing configuration file:

output {
  elasticsearch {
    hosts => [ .. ]
    ilm_enabled => false
  }
}

The link below may help: https://discuss.opendistrocommunity.dev/t/logstash-oss-with-non-removable-x-pack/655/3
Where does logstash/elasticsearch write data?
In the input section of my Logstash config file, I have created a configuration for reading a RabbitMQ queue. Using the RabbitMQ console, I can see Logstash drain the queue. However, I have no idea what Logstash is doing with the messages. Is it discarding them? Is it forwarding them to Elasticsearch? Here's the Logstash configuration:

input {
  rabbitmq {
    host => "192.168.34.151"
    exchange => an_exchange
    key => a_key
    queue => a_queue
  }
}

output {
  elasticsearch {
    embedded => true
    protocol => http
  }
}

Edit: removed the bogus comma from the config.
Unable to load index to elasticsearch using logstash
I'm unable to load an index into Elasticsearch using Logstash. The following are my logstash.conf settings. To me the config settings seem fine; please help if I'm missing something. Assume that the Logstash and Elasticsearch services are running fine.

input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC1/u_ex140930.log"
    start_postition => "beginning"
  }
}

output {
  stdout { debug => true debug_format => "ruby" }
  elasticsearch_http {
    host => "localhost"
    port => 9200
    protocol => "http"
    index => "iislogs2"
  }
}
You can start by checking the following:
- Check the Logstash log file for errors.
- Run telnet localhost 9200 and verify you are able to connect.
- Check the Elasticsearch log files for errors.
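If telnet is not available, curl serves the same purpose and additionally shows whether Elasticsearch answers with a valid JSON response; localhost:9200 assumes a default local installation:

# Verify the node is reachable and responding
curl http://localhost:9200/

# Check overall cluster health
curl "http://localhost:9200/_cluster/health?pretty"

A green or yellow status means the cluster is accepting writes; a red status usually points to why indexing fails.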