Not able to see logs in the index - elasticsearch

I have two mutate filters: one that routes all the /var/log/messages logs to type "security", and another that routes all the logs from one kind of host to type "host_type".
I am not able to see the /var/log/messages entries in the host_type index.
Here is the filter code I am using. Please help me understand what's going on here: why am I not able to see /var/log/messages in my apihost index?
I have Filebeat set up on the hosts to send logs to Logstash.
filter {
  if [source] =~ /\/var\/log\/(secure|syslog|auth\.log|messages|kern\.log)$/ {
    mutate {
      replace => { "type" => "security" }
    }
  }
}
filter-apihost.conf
filter {
  if ([host][name] =~ /(?i)apihost-/) or ([host] =~ /(?i)apihost-/) {
    mutate {
      replace => { "type" => "apihost" }
    }
  }
}

Actually, I fixed the issue by adding a clone filter to my Logstash config.
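For reference, a minimal sketch of what that clone-based approach can look like (the clone name "apihost" and the surrounding condition are assumptions, not the exact config): the clone filter emits a copy of the matching event, so one copy can keep type "security" while the copy is routed to the apihost index. Depending on the Logstash version, the copy is marked either via its type field or a tag, which the output section can then use to pick the index.
filter {
  if [source] =~ /\/var\/log\/messages$/ and [host][name] =~ /(?i)apihost-/ {
    clone {
      # emit an extra copy of the event, identified as "apihost"
      clones => ["apihost"]
    }
  }
}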


How to check if a specific index exists in ELK using a Logstash pipeline?

I want to write a Logstash pipeline that checks whether a specific index exists in the Elasticsearch environment; if it does, mark the incoming event as "valid", else "invalid".
Checking index existence with cURL:
curl -u elastic:elastic -I http://localhost:9200/sampletest1
valid output - HTTP/1.1 200 OK
invalid output - HTTP/1.1 404 Not Found
My logstash script:
input {
  beats {
    port => "5044"
  }
}
filter {
  # execute curl to check for index http://localhost:9200/%{process-code}
  # if response is 200 then mutate with add_tag "valid", else add tag "invalid"
  if "valid" in [tags] {
  } else {
    # delete event; prevent it from going to the output section
  }
}
output {
  # print only valid events
  stdout {
    codec => rubydebug
  }
}
I am stuck at the two # comment lines in the filter section. We can't use the exec plugin in the filter section!
Solved it using the http filter plugin, as suggested by Badger in the comments.
filter {
  json { source => "message" }
  http {
    url => "http://localhost:9200/%{process-code}"
    verb => "HEAD"
    body_format => "json"
    user => "elastic"
    password => "elastic"
  }
  if "_httprequestfailure" in [tags] {
    # index not present; drop prevents the event from going to the output section
    drop {}
  } else {
    # index present
    mutate { add_tag => [ "found" ] }
  }
}
Note: the http filter above adds a _jsonparsefailure tag to the output event; to avoid it we can set tag_on_json_failure => [].
Ref: https://discuss.elastic.co/t/http-filter-plugin-adds-jsonparsefailure-in-tag/277744
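For completeness, a sketch of the same http filter block with that option set (same assumed URL and credentials as above):
http {
  url => "http://localhost:9200/%{process-code}"
  verb => "HEAD"
  body_format => "json"
  # suppress the _jsonparsefailure tag on empty HEAD responses
  tag_on_json_failure => []
  user => "elastic"
  password => "elastic"
}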

Drop filter not working logstash

I have multiple log messages in a file which I am processing using Logstash filter plugins. The filtered logs are then sent to Elasticsearch.
There is a field called addID in each log message. I want to drop all the log messages that have a particular addID present. These particular addIDs are listed in an ID.yml file.
Scenario: if the addID of a log message matches any of the addIDs present in the ID.yml file, that log message should be dropped.
Could anyone help me achieve this?
Below is my config file.
input {
  file {
    path => "/Users/jshaw/logs/access_logs.logs"
    ignore_older => 0
  }
}
filter {
  grok {
    patterns_dir => ["/Users/jshaw/patterns"]
    match => ["message", "%{TIMESTAMP:Timestamp}+%{IP:ClientIP}+%{URI:Uri}"]
  }
  kv {
    field_split => "&?"
    include_keys => [ "addID" ]
    allow_duplicate_values => "false"
  }
  if [addID] in "/Users/jshaw/addID.yml" {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
You are using the in operator incorrectly. It checks whether a value is in an array (or string), not in a file; reading from a file is a bit more complicated.
One solution would be to use the ruby filter to read the file.
Alternatively, put the addID values directly in your configuration file, like this:
if [addID] == "addID" {
  drop {}
}
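A rough sketch of the ruby-filter approach mentioned above, assuming ID.yml is a flat YAML list of addIDs (the path and file structure are assumptions):
filter {
  ruby {
    # runs once at pipeline startup: load the list of blocked IDs
    init => '
      require "yaml"
      @blocked_ids = YAML.load_file("/Users/jshaw/addID.yml").to_a
    '
    # runs per event: cancel (drop) the event if its addID is in the list
    code => '
      event.cancel if @blocked_ids.include?(event.get("addID"))
    '
  }
}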

Logstash float field is not indexed

The following is my Logstash configuration. When I feed a log line into Logstash, it works as expected: all the fields are accepted by Elasticsearch, and the value and type of every field are correct. However, when I view the log in Kibana, it says that the cost field is not indexed, so it can't be visualized, while all the string fields are indexed. I want to visualize my float field. Does anyone know what the problem is?
input {
  syslog {
    facility_labels => ["local0"]
    port => 515
  }
  stdin {}
}
filter {
  grok {
    overwrite => ["host", "message"]
    match => { "message" => " %{BASE10NUM:cost} %{GREEDYDATA:message}" }
  }
  mutate {
    convert => { "cost" => "float" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch { }
}
Kibana doesn't automatically reload new fields from Elasticsearch; you need to refresh them manually.
Go to the Settings tab, select your index pattern, and reload the fields.

Tagging the Logs by Logstash - Grok - ElasticSearch

Summary:
I am using Logstash, Grok, and Elasticsearch. My main aim is to first accept the logs with Logstash, parse them with Grok, associate tags with the messages depending on the type of log, and finally feed them to the Elasticsearch server to query with Kibana.
I have already written this code, but I am not able to see the tags in Elasticsearch.
This is my Logstash config file.
input {
  stdin {
    type => "stdin-type"
  }
}
filter {
  grok {
    tags => "mytags"
    pattern => "I am a %{USERNAME}"
    add_tag => "mytag"
    named_captures_only => true
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {}
}
Where am I going wrong?
1) I would start by editing your values to match the data type they represent. For example,
add_tag => "mytag"
should actually have an array as its value, not a simple string. Change that to
add_tag => ["mytag"]
as a good start. Double-check all your values and verify they are of the correct type for Logstash.
2) You are limiting your grok filters to messages that are already tagged with "mytags" based on the config line
tags => "mytags"
I don't see anywhere that you add that tag ahead of time, so none of your messages will even go through your grok filter.
3) Please read the Logstash docs carefully. I am rather new to the Logstash/Grok/ES/Kibana world as well, but I have had very similar problems to yours, and all of them were solved by paying attention to what the documentation says.
You can run Logstash by hand (you may already be doing this) with /opt/logstash/bin/logstash -f $CONFIG_FILE, and you can check that your config file is valid with /opt/logstash/bin/logstash -f $CONFIG_FILE --configtest. I bet you're already doing that, though.
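Putting points 1) and 2) together, a corrected filter could look roughly like this on a current Logstash version (a sketch using the newer match syntax; the user capture name is just illustrative):
filter {
  grok {
    # no tags => precondition, so every message is run through the pattern
    match => { "message" => "I am a %{USERNAME:user}" }
    # add_tag takes an array and is applied only when the match succeeds
    add_tag => [ "mytag" ]
  }
}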
You may need to put your add_tag stanza into an array
grok {
  ...
  add_tag => [ "mytag" ]
}
It could also be that what you're piping into STDIN isn't being matched by the grok pattern. If grok doesn't match, it should result in _grokparsefailure being added to your tags. If you see those, it means your grok pattern isn't firing.
A better way to do this may be...
input {
  stdin {
    type => 'stdin'
  }
}
filter {
  if [type] == 'stdin' {
    mutate {
      add_tag => [ "mytag" ]
    }
  }
}
output {
  stdout {
    codec => 'rubydebug'
  }
}
This will add a "mytag" tag to everything coming from standard input, whether it gets grokked or not.

Logstash date filter not working

I have the following configuration file. When I run it, I see the timestamp changed in the terminal output, but the log is not shipped to Elasticsearch.
Here is the configuration file:
input {
  stdin {
    type => "stdin-type"
  }
}
filter {
  grok {
    type => "stdin-type"
    patterns_dir => ["./patterns"]
    pattern => "%{PARSE_ERROR}"
    add_tag => "%{type1},%{type2},%{slave},ERR_SYSTEM"
  }
  mutate {
    type => "stdin-type"
    replace => ["@message", "%{message}"]
    replace => ["@timestamp", "2013-05-09T05:19:16.876Z"]
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
  }
}
On removing the replace line, the log gets shipped. Where am I going wrong?
Run Logstash with the verbose flags and then check your Logstash log for any output. In verbose mode, the Logstash process usually confirms whether the message was sent off to ES or why it wasn't.
Your config looks clean... if the verbose flags don't give you any meaningful output, then you should check your ES setup.
Try the second 'replace' in a separate mutate block:
mutate {
  type => "stdin-type"
  replace => ["@message", "%{message}"]
}
mutate {
  type => "stdin-type"
  replace => ["@timestamp", "2013-05-09T05:19:16.876Z"]
}
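If the goal is to set @timestamp from a value parsed out of the log rather than hard-coding it, the date filter is the usual tool. A minimal sketch, assuming the parsed timestamp ends up in a field called logtime (an illustrative name, not one from the config above):
filter {
  date {
    # parse the logtime field as ISO8601 and write the result to @timestamp
    match => [ "logtime", "ISO8601" ]
    target => "@timestamp"
  }
}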
