How to check if a specific index exists in ELK using a logstash pipeline? - elasticsearch

I want to write a logstash pipeline that checks whether a specific index exists in the ES environment; if yes, mark the incoming event as "valid", else "invalid".
To check index existence using cURL:
curl -u elastic:elastic -I http://localhost:9200/sampletest1
valid output - HTTP/1.1 200 OK
invalid output - HTTP/1.1 404 Not Found
My logstash script:
input {
  beats {
    port => "5044"
  }
}
filter {
  # execute curl to check for index http://localhost:9200/%{process-code}
  # if response is 200 then mutate with add_tag "valid", else add tag "invalid"
  if "valid" in [tags] {
  } else {
    # delete event; prevent it from going to output section
  }
}
output {
  # print only valid events
  stdout {
    codec => rubydebug
  }
}
I am stuck at the two commented lines in the filter section. We can't use the exec plugin in the filter section!

Solved it using the "http filter plugin", as suggested by Badger in the comments.
filter {
  json { source => "message" }
  http {
    url => "http://localhost:9200/%{process-code}"
    verb => "HEAD"
    body_format => "json"
    user => "elastic"
    password => "elastic"
  }
  if "_httprequestfailure" in [tags] {
    # index not present; drop will prevent it from going to output section
    drop {}
  } else {
    # index present
    mutate { add_tag => [ "found" ] }
  }
}
Note: the http plugin above adds a _jsonparsefailure tag to the output event; to avoid it we can set tag_on_json_failure => [].
Ref: https://discuss.elastic.co/t/http-filter-plugin-adds-jsonparsefailure-in-tag/277744
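For reference, a minimal sketch of the filter with that option applied (reusing the same URL and credentials from the pipeline above):
filter {
  http {
    url => "http://localhost:9200/%{process-code}"
    verb => "HEAD"
    user => "elastic"
    password => "elastic"
    # a HEAD response has no body, so suppress the JSON-parse failure tag
    tag_on_json_failure => []
  }
}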

Related

Not able to see logs in the index

I have two mutate filters: one to set all the /var/log/messages events to type "security", and another to set all the logs from one kind of host to type "host_type".
I am not able to see the /var/log/messages events in the host_type index.
Here is the filter code I am using; please help me understand what's going on here. Why am I not able to see /var/log/messages in my apihost index?
I have filebeat set up on the hosts to send logs to logstash.
filter {
  if [source] =~ /\/var\/log\/(secure|syslog|auth\.log|messages|kern\.log)$/ {
    mutate {
      replace => { "type" => "security" }
    }
  }
}
filter-apihost.conf
filter {
  if (([host.name] =~ /(?i)apihost-/) or ([host] =~ /(?i)apihost-/)) {
    mutate {
      replace => { "type" => "apihost" }
    }
  }
}
Actually, I fixed the issue by adding a clone filter to my logstash config.
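A minimal sketch of that approach (the clone name "apihost_copy" and the exact field references are assumptions, not from the original config): the clone filter emits a duplicate event whose type is set to the clone name, so one copy can stay "security" while the duplicate is retyped to "apihost".
filter {
  if [source] =~ /\/var\/log\/messages$/ and [host][name] =~ /(?i)apihost-/ {
    # emits a second event with type set to "apihost_copy"
    clone {
      clones => [ "apihost_copy" ]
    }
  }
  if [type] == "apihost_copy" {
    mutate {
      replace => { "type" => "apihost" }
    }
  }
}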

grok not parsing logs

Log Sample
[2020-01-09 04:45:56] VERBOSE[20735][C-0000ccf3] pbx.c: Executing [9081228577525#from-internal:9] Macro("PJSIP/3512-00010e39", "dialout-trunk,1,081228577525,,off") in new stack
I'm trying to parse some logs. I have tested the grok pattern against sample logs and it returns the result I need, but when I combine it with my config and run it, the logs are not parsed into the index.
Here is my config:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "asterisk_debug" {
    if [message] =~ /^\[/ {
      grok {
        match => {
          "message" => "\[%{TIMESTAMP_ISO8601:log_timestamp}\] +(?<log_level>(?i)(?:debug|notice|warning|error|verbose|dtmf|fax|security)(?-i))\[%{INT:thread_id}\](?:\[%{DATA:call_thread_id}\])? %{DATA:module_name}\: %{GREEDYDATA:log_message}"
        }
        add_field => [ "received_timestamp", "%{@timestamp}" ]
        add_field => [ "process_name", "asterisk" ]
      }
      if ![log_message] {
        mutate {
          add_field => { "log_message" => "" }
        }
      }
      if [log_message] =~ /^Executing/ and [module_name] == "pbx.c" {
        grok {
          match => {
            "log_message" => "Executing +\[%{DATA:TARGET}#%{DATA:dialplan_context}:%{INT:dialplan_priority}\] +%{DATA:asterisk_app}\(\"%{DATA:protocol}/%{DATA:Ext}-%{DATA:Channel}\",+ \"%{DATA:procedure},%{INT:trunk},%{DATA:dest},,%{DATA:mode}\"\) %{GREEDYDATA:log_message}"
          }
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "new_asterisk"
  }
}
When I check the index in Kibana, it just shows the raw logs.
Question: why is my config not parsing the logs, even though the grok pattern tested successfully (for me)?
Solved: the logs never got into the if condition.
It seems like your grok actions don't get applied at all, because the data gets indexed raw and no error tags are thrown. Evidently your documents don't contain a field type with the value asterisk_debug, which is your condition for executing the grok actions.
To verify this, you could implement a simple else path that adds a field or tag indicating that the condition was not met, like so:
filter {
  if [type] == "asterisk_debug" {
    # your grok's ...
  }
  else {
    mutate {
      add_tag => [ "no_asterisk_debug_type" ]
    }
  }
}
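To see what the events actually contain (including what value, if any, [type] holds), a quick sketch is to dump them to the console with the rubydebug codec alongside the existing elasticsearch output:
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "new_asterisk"
  }
  # temporary: print each event so you can inspect the [type] field
  stdout {
    codec => rubydebug
  }
}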

How to filter {"foo":"bar", "bar": "foo"} with grok to get only the foo field?

I copied
{"name":"myapp","hostname":"banana.local","pid":40161,"level":30,"msg":"hi","time":"2013-01-04T18:46:23.851Z","v":0}
from https://github.com/trentm/node-bunyan and saved it as my logs.json. I am trying to import only two fields (name and msg) into ElasticSearch via LogStash. The problem is that I need a kind of filter that I have not been able to write. I have successfully imported such a line as a single message, but that is certainly not enough in my real case.
That said, how can I import only name and msg into ElasticSearch? I tested several alternatives using http://grokdebug.herokuapp.com/ to arrive at a useful filter, with no success at all.
For instance, %{GREEDYDATA:message} will bring in the entire line as a single message, but how do I split it and ignore everything other than the name and msg fields?
In the end, I am planning to use it here:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "data=%{GREEDYDATA:request}" }
  }
  #### some extra lines here probably
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
I have just gone through the list of available Logstash filters. The prune filter should match your need.
Assuming you have installed the prune filter, your config file should look like:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  prune {
    whitelist_names => [
      "@timestamp",
      "type",
      "name",
      "msg"
    ]
  }
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
Please note that you will want to keep type so that Elasticsearch indexes the data into the correct type. @timestamp is required if you want to view the data in Kibana.
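One caveat (an addition, not from the original answer): with the json codec, @timestamp is the ingestion time, and the bunyan time field gets pruned away. If you want @timestamp to reflect the event time instead, a date filter placed before the prune would do it, assuming the time field is ISO8601 as in the sample line:
filter {
  # map the "time" field onto @timestamp before prune removes it
  date {
    match => [ "time", "ISO8601" ]
  }
  prune {
    whitelist_names => [ "@timestamp", "type", "name", "msg" ]
  }
}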

Logstash unable to start when I add grep filter

I have a logstash instance deployed locally and I am trying to get my head wrapped around it. I added a simple grep filter to the logstash.conf file, but when I restart the service, it fails; when I remove the grep statement, it works fine. Here is my config. Any help would be appreciated. Thanks.
input {
  kafka {
    zk_connect => "localhost:9091"
    topic_id => "rawlog"
    reset_beginning => false
    consumer_threads => 1
    consumer_restart_on_error => true
    consumer_restart_sleep_ms => 100
    decorate_events => false
  }
}
output {
  elasticsearch {
    bind_host => "localhost"
    protocol => "http"
  }
}
filter {
  grep {
    match => { "message" => "hello-world" }
  }
}
grep{} is deprecated in favor of conditionals and drop{}:
filter {
  if [message] !~ /hello-world/ {
    drop {}
  }
}
If that doesn't help, post a sample of your input.

Logstash date filter not working

I have the following configuration file. When I run it, the timestamp is changed in the terminal output, but the log is not shipped to ElasticSearch.
Here is the configuration file:
input {
  stdin {
    type => "stdin-type"
  }
}
filter {
  grok {
    type => "stdin-type"
    patterns_dir => ["./patterns"]
    pattern => "%{PARSE_ERROR}"
    add_tag => "%{type1},%{type2},%{slave},ERR_SYSTEM"
  }
  mutate {
    type => "stdin-type"
    replace => ["@message", "%{message}"]
    replace => ["@timestamp", "2013-05-09T05:19:16.876Z"]
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
  }
}
On removing the replace line, the log gets shipped. Where am I going wrong?
Run logstash with the verbose flags, then check your logstash log for output. In verbose mode, the logstash process usually confirms whether the message was sent off to ES, or why it wasn't.
Your config looks clean; if the verbose flags don't give you any meaningful output, then you should check your ES setup.
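For example (a sketch only; the exact flags depend on your Logstash version, and this assumes the 1.x-era agent command):
bin/logstash agent -f logstash.conf -v
# use -vv for even more detail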
Try the second 'replace' in a second mutate code block.
mutate {
  type => "stdin-type"
  replace => ["@message", "%{message}"]
}
mutate {
  type => "stdin-type"
  replace => ["@timestamp", "2013-05-09T05:19:16.876Z"]
}
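An alternative worth trying (a sketch, not from the original answer): @timestamp is a Timestamp object rather than a plain string, so replacing it with a string via mutate can produce an event Elasticsearch rejects. The date filter is the supported way to set it, assuming the desired time lives in some field, here a hypothetical logdate:
filter {
  # parse an ISO8601 string out of [logdate] and write it into @timestamp
  date {
    match => [ "logdate", "ISO8601" ]
  }
}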
