Logstash Elastic Cloud 401 Unauthorized error - elasticsearch

I am following the official Logstash Elastic Cloud module documentation and the official getting-started doc.
My logstash.yml looks like:
cloud.id: "Test:testkey"
cloud.auth: "elastic:password"
Both lines are indented with two spaces, have no trailing space, and the values are inside double quotes.
This is all I have in logstash.yml, nothing else, and I am getting:
[2018-08-29T12:33:52,112][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://myserverurl:12345/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://myserverurl:12345/'"}
And my_config_file_name.conf looks like:
input { jdbc { ...jdbc config here... } }   (this part works, as I see data in the Windows console)
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => ["myserverurl:12345"]
    index => "my_index"
    # document_id => "%{brand}"
  }
}
I am running bin/logstash from the Windows command prompt. It loads the data from the database configured in the input section of the conf file and then shows me the error above. I want to index my MySQL data into Elasticsearch on Elastic Cloud; I took the 14-day trial and created a test index for learning purposes, as I later have to deploy this.
My pipelines.yml looks like:
- pipeline.id: my_id
  path.config: "./config/conf_file_name.conf"
  pipeline.workers: 1
If the logs don't include sensitive data, I can also provide them.
Basically I want to sync (on a schedule) my MySQL data with Elasticsearch on Elastic Cloud (hosted on AWS).

The output should be:
elasticsearch {
  hosts => ["https://yourhost:yourport/"]
  user => "elastic"
  password => "password"
  # protocol => https
  # port => "yourport"
  index => "test_index"
  # document_id => "%{table_id}"
}
(lines starting with # are comments)
as stated in the "Configuring Logstash with Elastic Cloud" docs.
The document provided while deploying the app does not cover the jdbc configuration; the jdbc input also needs its own user and password, even if credentials are defined in the settings file (logstash.yml).
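For reference, a minimal sketch of a jdbc input that carries its own credentials plus a schedule for the periodic MySQL sync; the connection string, driver, credentials, and query below are placeholders, not values from the question:
input {
  jdbc {
    # Hypothetical MySQL connection details; replace with your own.
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_user => "db_user"
    jdbc_password => "db_password"
    statement => "SELECT * FROM my_table"
    # Cron-style schedule so the MySQL data is re-checked every minute.
    schedule => "* * * * *"
  }
}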

Also, if you created your API key in the web UI, you will not be able to get the values needed to configure Logstash. You must use the Dev Tools console found at /app/dev_tools#/console with something like this:
POST /_security/api_key
{
  "name": "logstash"
}
The output of which is something like:
{
  "id": "<id value>",
  "name": "logstash",
  "api_key": "<api key>",
  "encoded": "<encoded api key>"
}
And in your logstash pipeline config you use the values like this:
output {
  elasticsearch {
    cloud_id => "<cloud id>"
    api_key => "<id value>:<api key>"
    data_stream => true
    ssl => true
  }
  stdout { codec => rubydebug }
}
Note the combined "api_key" value separated by ":". Also, you can find the "cloud id" under your "Deployments" menu option.
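For illustration, if the create call returned an id of AAAbbb123 and an api_key of XXXyyy456 (made-up values), the setting in the output would read:
api_key => "AAAbbb123:XXXyyy456"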

I had the same issue in my dev environment. After scouring Google for hours, I understood that X-Pack is installed by default when you install Logstash. The doc https://www.elastic.co/guide/en/logstash/current/setup-xpack.html states:
"X-Pack is an Elastic Stack extension that provides security, alerting, monitoring, machine learning, pipeline management, and many other capabilities."
As I don't need X-Pack in my dev environment while streaming to Elasticsearch, I had to disable it by setting ilm_enabled to false in the output section of my indexing configuration file.
output {
  elasticsearch {
    hosts => [.. ]
    ilm_enabled => false
  }
}
The link below may help:
https://discuss.opendistrocommunity.dev/t/logstash-oss-with-non-removable-x-pack/655/3

Related

How to push logs from kubernetes to elastic cloud deployment?

I am trying to configure logstash and filebeat, running in kubernetes, to connect and push logs from the kubernetes cluster to my deployment in Elastic Cloud.
I have configured the logstash.yaml file with the host, username, and password; please find the config below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: ns-elastic
data:
  logstash.conf: |-
    input {
      beats {
        port => "9600"
      }
    }
    filter {
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }
      # Container logs are received with a variable named index_prefix.
      # Since it is in json format, we can decode it via the json filter plugin.
      if [index_prefix] == "store-logs" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      if [index_prefix] == "ingress-" {
        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      # do not expose the index_prefix field to kibana
      mutate {
        # @metadata is not exposed outside of Logstash by default.
        add_field => { "[@metadata][index_prefix]" => "%{index_prefix}-%{+YYYY.MM.dd}" }
        # since we added index_prefix to metadata, we no longer need the ["index_prefix"] field.
        remove_field => ["index_prefix"]
      }
    }
    output {
      # You can uncomment this line to investigate the events generated by logstash.
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => "https://******.es.*****.azure.elastic-cloud.com:9243"
        user => "username"
        password => "*****************"
        document_id => "%{[@metadata][fingerprint]}"
        # The events will be stored in elasticsearch under the previously defined index_prefix value.
        index => "%{[@metadata][index_prefix]}"
      }
    }
However, the logstash restarts with the below error:
[2022-06-19T17:32:31,943][INFO ][org.logstash.beats.Server][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] Starting server on port: 9600
[2022-06-19T17:32:38,154][ERROR][logstash.javapipeline ][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>9600, id=>"3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_4b2c91f6-9a6f-4e5e-9a96-5b42e20cd0d9", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.3, cipher_suites=>["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>1>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:459)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:448)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
Can anyone please help me understand what I am doing incorrectly here? My end goal is to push logs from my kubernetes cluster to my Elasticsearch deployment on Elastic Cloud. Please assist, as I have not been able to find enough resources on this.
The error we see in your logs says:
Error: Address already in use
Exception: Java::JavaNet::BindException
This means there is already a process bound to TCP port 9600.
You can use netstat -plant to inspect the services listening on your host. It could be another instance of logstash that was not shut down properly.
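If the clash turns out to be Logstash's own process (its monitoring API binds to the 9600-9700 range by default), a minimal sketch of the fix is to move the beats input to a port outside that range, such as the conventional 5044, and point Filebeat at the same port:
input {
  beats {
    # Any free port outside the 9600-9700 range used by the Logstash API works;
    # 5044 is the usual convention for beats.
    port => 5044
  }
}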

ELK - How to use different source in logstash

I have an ELK installation that is running so far, and I want to use it to analyse log files from different sources:
nginx-logs
auth-logs
and so on...
I am using filebeat to collect content from logfiles and send it to logstash with this filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
      - /var/nginx/example_com/logs/

output.logstash:
  hosts: ["localhost:5044"]
In logstash I have already configured a grok section, but only for nginx logs, because this was the only working tutorial I found. So this config receives content from filebeat, filters it (that's what grok is for?), and sends it to elasticsearch.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    patterns_dir => "/etc/logstash/patterns"
    match => { "message" => "%{NGINXACCESS}" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
That's the content of the one nginx-pattern file I am referencing:
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} (?:-|(%{WORD}.%{WORD})) %{USER:ident} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:forwarder}
But I have trouble understanding how to manage different log data sources, because right now Kibana only displays log content from /var/log and there is no log data from my nginx folder.
What is it that I am doing wrong here?
Since you are running filebeat, you already have a module available that processes nginx logs: the filebeat nginx module.
This way, you will not need logstash to process the logs, and you only have to point the output directly to elasticsearch.
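A minimal filebeat.yml sketch of that module route (the hosts value is a placeholder, not taken from the question):
filebeat.modules:
  - module: nginx
    access:
      enabled: true
    error:
      enabled: true

output.elasticsearch:
  hosts: ["localhost:9200"]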
But since you are processing multiple paths with different logs, and because Filebeat does not allow more than one output at the same time (logstash + elasticsearch), you can set logstash to only process the logs that do not come from nginx. This way, and using the module (which comes with sample dashboards), your logs will flow:
Filebeat -> Logstash (from input plugin to output plugin - without any filtering) -> Elasticsearch
If you really want to process the logs on your own, you are on a good path. But right now, all your logs are being processed by the grok pattern, so maybe the problem is that your pattern treats logs from nginx and logs not from nginx in the same way. You can filter the logs in the filter plugin with something like this:
# if you are using the module
filter {
  if [fileset][module] == "nginx" {
  }
}
If not, please check the different examples available in the logstash docs.
Another thing you can try is to add this to your filter. This way, if the grok fails, you will still see the log in kibana, but with the "_grok_parse_error_nginx_error" failure tag.
grok {
  patterns_dir => "/etc/logstash/patterns"
  match => { "message" => "%{NGINXACCESS}" }
  tag_on_failure => [ "_grok_parse_error_nginx_error" ]
}

logstash not pushing logs to AWS Elasticsearch

I am trying to push my logs from logstash to elasticsearch but it's failing. Here is my logstash.conf file:
input {
  file {
    path => "D:/shweta/ELK_poc/test3.txt"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
output {
  elasticsearch {
    hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com" ]
    index => "testindex4-5july"
    document_type => "test-file"
  }
}
The ES endpoint that I have provided in hosts is open, so there should not be an access issue, but it still gives the following error:
[2018-07-05T13:59:05,753][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/, :path=>"/"}
[2018-07-05T13:59:05,769][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/][Manticore::ResolutionFailure] This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com)"}
I am stuck here. But when I downloaded ES, installed it on my machine, and ran it locally, replacing hosts in the output with hosts => [ "localhost:9200" ], it worked fine and pushed data to the local ES.
I have tried a lot of things but am not able to resolve the issue; can anyone please help? I don't want to use localhost but the AWS ES domain endpoint. Any hints or leads will be highly appreciated.
Thanks in advance,
Shweta
In my opinion, you simply need to explicitly add the port 443 and it will work. I think the elasticsearch output plugin automatically uses port 9200 if no port is explicitly given.
elasticsearch {
  hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:443" ]
  index => "testindex4-5july"
  document_type => "test-file"
}
An alternative would be to not add the port but specify ssl => true as depicted in the official AWS ES docs
elasticsearch {
  hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com" ]
  index => "testindex4-5july"
  document_type => "test-file"
  ssl => true
}

logstash and x-forwarded-for on IIS

I just built an ELK server on Windows, so I'm new to the process. I've read through the docs but am having trouble parsing out my IIS advanced logs, especially the x-forwarded-for data, as we're behind a load balancer.
My advanced logging is set up to output the data like this:
$date, $time, $s-ip, $cs-uri-stem, $cs-uri-query, $s-port, $cs-username, $c-ip, $X-Forwarded-For, $csUser-Agent, $cs-Referer, $sc-status, $sc-substatus, $sc-win32-status, $time-taken
I set up my logstash.conf like this:
input {
  tcp {
    host => "localhost"
    type => "iis"
    port => 5044
  }
}
filter {
  if [type] == "iis" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:site} %{URIPATH:page} %{NOTSPACE:query_string} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:client_host} %{NOTSPACE:useragent} %{NOTSPACE:referer} %{GREEDYDATA:response} %{NUMBER:httpStatusCode:int} %{NUMBER:scSubstatus:int} %{NUMBER:scwin32status:int} %{NUMBER:timeTakenMS:int}"}
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "iis"
    document_type => "main"
  }
}
I don't think this is correct as I'm not getting data. I've scoured the docs but am still having issues and am not sure if there are other steps I need to take, like mapping the fields.
I'm currently using filebeat on one server to push data to my ELK server. I'm not sure if this is the best way either (maybe nxlog?). We don't want to install logstash on the client machines.
Can someone lend me a hand? It would be GREATLY appreciated!!
Thanks,
George
Since you are using Filebeat, you need to use the beats input and not the tcp input. See the documentation on how to set up Logstash for Beats.
Essentially you need to replace your tcp input with:
input {
  beats {
    port => 5044
  }
}
And inside your Filebeat configuration file, set the document_type to iis so that your filter condition will match.
filebeat:
  prospectors:
    - paths:
        - 'C:\path\to\your\iis\logs\*.log'
      document_type: iis
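Note that document_type was removed in Filebeat 6.x; on newer versions a roughly equivalent sketch (the log_type field name here is arbitrary) attaches a custom field instead, and the Logstash condition then matches on that field rather than on [type]:
filebeat.inputs:
  - type: log
    paths:
      - 'C:\path\to\your\iis\logs\*.log'
    fields:
      # custom field used instead of the removed document_type option
      log_type: iis
    fields_under_root: true
With this, the filter condition in Logstash would become if [log_type] == "iis".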

Not getting each error email alert from logstash 1.5.4

I have my ELK setup like below:
HOST1: Component (which generates the logs) + Logstash (to send logs to redis)
HOST2: Redis + Elasticsearch + Logstash (to parse data based on grok and send it to elasticsearch on the same setup)
HOST3: Redis + Elasticsearch + Logstash (to parse data based on grok and send it to elasticsearch on the same setup)
HOST4: nginx + Kibana 4
Now when I send one error log line from logstash to redis, I get a duplicate entry in Kibana 4.
Plus, I didn't get any email alert from logstash, although it is configured to send an alert when severity == "Erro".
This is part of the logstash conf file:
output {
  elasticsearch { host => ["<ELK IP>"] port => "9200" protocol => "http" }
  if [severity] =~ /Erro/
  {
    email {
      from => "someone@somedomain.com"
      subject => "Error Alert"
      to => "someone@somedomain.com"
      via => "smtp"
      htmlbody => "<h2>Error Alert1</h2><br/><br/><div align='center'>%{message}</div>"
      options => [
        "smtpIporHost", "smtp.office365.com",
        "port", "587",
        "domain", "smtp.office365.com",
        "userName", "someone@somedomain.com",
        "password", "somepasswd",
        "authenticationType", "login",
        "starttls", "true"
      ]
    }
  }
  stdout { codec => rubydebug }
}
I am using following custom grok pattern to parse log line:
ABTIMESTAMP %{YEAR}%{MONTHNUM2}%{MONTHDAY} %{USERNAME}
ABLOGLEVEL (Note|Erro|Fatl|Warn|Urgt)
ABLOG %{ABTIMESTAMP:timestamp} %{HOST:hostname} %{WORD:servername} %{INT:pid} %{INT:lwp} %{INT:thread} %{ABLOGLEVEL:severity};%{USERNAME:event}\(%{NUMBER:msgcat}/%{NUMBER:msgnum}\)%{GREEDYDATA:greedydata}
Any help here on how to get an email alert for every error log line? Thanks in advance!
Resolved it... Actually, I had multiple conf files in the logstash/conf.d folder. I removed all the unnecessary files and kept only my conf file, and now it's working. :) Thank you Val for your help.
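For context, all the files under logstash/conf.d are concatenated into a single pipeline, so every event passes through every filter and output defined in any of the files, which is what produced the duplicate entries. On newer Logstash versions (6.0+), if you genuinely need several config files, a pipelines.yml sketch like this keeps them isolated (the ids and paths are placeholders):
- pipeline.id: alerts
  path.config: "/etc/logstash/conf.d/alerts.conf"
- pipeline.id: other_logs
  path.config: "/etc/logstash/conf.d/other.conf"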
