Logstash giving Exception when searching any keyword on UI - elasticsearch

I started Logstash with the config set to embedded=true for Elasticsearch, and I get the following exception whenever I search for any keyword in the UI:
NativeException: org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [initial], No indices / shards to search on, requested indices are []
Currently we are testing with static log files (copied from another server), not ever-growing log files.
Any help on this will be appreciated.

I was having the same issue, and the reason was that my grok filter used the same capture name twice for different patterns, e.g.
filter {
  grok {
    type => "foo"
    keep_empty_captures => true
    pattern => [
      "(?<bar>/file/path/a)",
      "(?<bar>/file/path/b)"
    ]
  }
}
I replaced the second <bar> with a <qux> and the error disappeared.
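For reference, the fix amounts to giving the second pattern its own capture name (same legacy pattern/type syntax as above):
filter {
  grok {
    type => "foo"
    keep_empty_captures => true
    pattern => [
      "(?<bar>/file/path/a)",
      "(?<qux>/file/path/b)"
    ]
  }
}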

Related

Parsing Syslog with Logstash grok filter isn’t working with Kibana

I have created a very basic grok filter to parse Cisco syslogs:
input {
  udp {
    port => 5140
    type => syslog
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:Timestamp_Local} %{IPV4:IP_Address} %{GREEDYDATA:Event}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    index => "ciscologs-%{+YYYY.MM.dd}"
  }
}
After reloading Logstash and verifying that the logs showed no major issues, I reloaded Kibana and refreshed the indices.
When accessing the Discover section, I saw that the index was indeed created, but the fields were the default ones and not the ones defined in the grok filter.
The logs received after adding the filter show a tag in Kibana indicating there was a problem parsing the logs.
Before adding the filter I made sure the pattern works using Kibana's Grok debugger.
Running versions are 7.7.1 (Kibana/Elasticsearch) and 7.13.3 (Logstash).
I'm not sure where the issue might be, so any help would be appreciated.
I found the problem. I was trying to match the fields in the order the Cisco devices send them, rather than the order in which they actually appear in the "message" field. Once I modified that, the filter started working as expected.
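As an illustration only (the real layout of the Cisco message is an assumption here), if the content of the "message" field actually starts with the source IP before the timestamp, the pattern has to follow that order:
filter {
  grok {
    # hypothetical order: IP first, then timestamp, then the rest of the event
    match => { "message" => "%{IPV4:IP_Address} %{TIMESTAMP_ISO8601:Timestamp_Local} %{GREEDYDATA:Event}" }
  }
}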

How to access ruby code variables in the output section of logstash conf

I am working to create dynamic logstash buckets based on date formulas. My objective is to be able to dynamically calculate the date of a logstash bucket based on a defined variable in the incoming log file.
For this, I am currently testing with a single .conf file that contains the input, filter (with ruby code) and output section. I am pushing the output to my elasticsearch setup. I have worked out the formulas and tested the same in regular ruby through 'irb' and the formulas are working as expected.
I am lost when it comes to accessing, in the output section, a variable that is set in the filter section.
I have successfully used the following syntax in the output section to reference the year/month/date:
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => elastic
    password => "bar"
    index => "foo-%{+YYYY.MM.dd}"
  }
}
I would try the "%{variable}" syntax.
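A minimal sketch of that approach (assuming a Logstash version where the ruby filter exposes event.set; older versions use event['field'] = ..., and the bucket_date field name and date math here are made up for illustration): set the computed value on the event inside the ruby filter, then reference it with the %{field} sprintf syntax in the output.
filter {
  ruby {
    # hypothetical date calculation; store the result on the event so it
    # can be referenced outside the ruby block
    code => "event.set('bucket_date', (Time.now - 86400).strftime('%Y.%m.%d'))"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => elastic
    password => "bar"
    index => "foo-%{bucket_date}"
  }
}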

Trying to import csv data into elasticsearch and then visualise it in Kibana

I am trying to import the CSV-formatted data below into Elasticsearch.
Below is my SampleCSV.csv file:
Date,Amount,Type,Db/Cr,Desc
4/1/2015,10773,Car M&F,Db,Insurance
4/1/2015,900,Supporting Item,Db,Wifi Router
4/1/2015,1000,Car M&F,Db,Fuel
4/1/2015,500,Car M&F,Db,Stepni Tyre Tube
4/1/2015,770,FI,Db,FI
4/1/2015,65,FI,Db,Milk
I am using the configuration below:
input {
  stdin {
    type => "sample"
  }
  file {
    path => "C:/Users/Deepak/Desktop/SampleCSV.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["Date","Amount","Type","Db/Cr","Desc"]
    separator => ","
  }
}
output {
  elasticsearch {
    action => "index"
    host => "localhost"
    index => "sample"
    workers => 1
  }
  stdout {
    debug => true
  }
}
I am executing the command below:
C:\MyDrive\Apps\logstash-2.0.0\bin>logstash agent -f C:\MyDrive\Apps\logstash-2.0.0\bin\MyLogsConfig.conf
io/console not supported; tty will not be manipulated
Default settings used: Filter workers: 2
Logstash startup completed
Now my problem is that when I look in Kibana for the "sample" index, I am not getting any data at all. It looks like no data was imported into Elasticsearch, so Kibana has nothing to show.
Do you know the reason why?
Several things to take into account. You can split debugging your problem into two parts:
First, you want to make sure that Logstash is picking up the file contents as you expect. For that you can remove/comment out the elasticsearch output, restart Logstash, and add a new line of data to your SampleCSV.csv file (don't forget the newline/CR at the end of the new line, otherwise it won't be picked up). If Logstash picks up the new line, it should appear in your console output (because you added the stdout output). Keep in mind that the file input remembers where it last read from a log file and continues from that position (it stores this offset in a special file called the sincedb). start_position => "beginning" only applies the first time you start Logstash; on subsequent runs it continues from where it last ended, meaning you won't see any new lines in your console unless you a) add new lines to your files, or b) manually delete the sincedb file (sincedb_path => null was not working under Windows, at least when I last tried).
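A stripped-down config for that first step might look something like this (a sketch: the sincedb_path value is made up so the position file is easy to find and delete between runs, and the rubydebug codec is used for the console output):
input {
  file {
    path => "C:/Users/Deepak/Desktop/SampleCSV.csv"
    start_position => "beginning"
    # made-up location for the position file so it can be deleted easily
    sincedb_path => "C:/Users/Deepak/Desktop/SampleCSV.sincedb"
  }
}
filter {
  csv {
    columns => ["Date","Amount","Type","Db/Cr","Desc"]
    separator => ","
  }
}
output {
  # elasticsearch output removed while checking that new lines are picked up
  stdout { codec => rubydebug }
}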
As soon as you get that working you can start looking at the Elasticsearch/Kibana side of things. Looking at your config, I'd suggest you stick with the default index name pattern (logstash-XXX, where XXX is something like the creation date; notice: DON'T USE UPPERCASE LETTERS in your index name or Elasticsearch will refuse your data). You should use the default pattern because, as far as I know, the special Logstash index mapping (e.g. raw fields, ...) is registered for that name pattern only. If you change the pattern, you'd have to change the mapping too.
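For example, switching to the default-style, all-lowercase index name would look like this (a sketch; the hosts option follows the Logstash 2.x elasticsearch output, whereas the original config used host):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # default logstash-* naming, all lowercase, so the stock index template applies
    index => "logstash-%{+YYYY.MM.dd}"
  }
}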
The ELK stack is a bit confusing in the beginning but don't hesitate to ask if you have any more questions.

Logstash multiline filter fragments logs

I'm trying to use the multiline filter to combine a fairly long Java exception stack trace with the main log message. I'm using the example shown here: https://gist.github.com/smougenot/3182192
Following is my code:
input {
  stdin {}
}
filter {
  multiline {
    pattern => "(^.+Exception: .+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
    what => "previous"
  }
}
However, instead of combining the stack trace into one event, it is fragmented into multiple different logs.
I can't seem to understand why. I have tried using the codec instead of the filter, yet the problem persists.
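For reference, "using the codec instead of the filter" would look roughly like this (a sketch with the same pattern; whether it helps depends on how the lines arrive on stdin):
input {
  stdin {
    codec => multiline {
      # lines matching the pattern are merged into the previous event
      pattern => "(^.+Exception: .+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
      what => "previous"
    }
  }
}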

ELK stack pulling in older logs

I am still newish to the ELK (Elasticsearch, Logstash, Kibana) stack and I am having some issues getting the logs I want to show up in Kibana.
What I am seeing is an older, unrelated log.
I see it whether I run with Elasticsearch in embedded mode or against a standalone Elasticsearch.
When I run with output to standard out, I get the logs I would expect, but when I look in Kibana it's the older, unrelated logs.
I have tried deleting the sincedb file and it still doesn't help.
Is there an extra file that I may have to delete that caches the indices?
Any help would be greatly appreciated!
Edit: Here is my logstash conf.
input {
  file {
    path => ["/Users/me/logs/m1/2014-05-21/consumer/bp1standalone2-0/*"]
    sincedb_path => "./sincebb.xxx"
    start_position => "beginning"
  }
}
filter {
  grok {
    patterns_dir => "../patterns"
    match => [ "message", "%{CONSUMERLOGPARSER}" ]
  }
}
output {
  elasticsearch {
    embedded => true
  }
}
There are no extra caches to delete; it is the sincedb files that store how far into each log file Logstash has read. However, with start_position set to "beginning" as you have it, any file Logstash hasn't seen before (or whose sincedb entry you deleted) is processed from the start, so the older entries it contains are indexed as well.
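To spell out what that means for the config above, here is the same input with the two relevant options annotated (a sketch; nothing changed apart from the comments):
input {
  file {
    path => ["/Users/me/logs/m1/2014-05-21/consumer/bp1standalone2-0/*"]
    # the sincedb stores how far into each file Logstash has already read
    sincedb_path => "./sincebb.xxx"
    # new files, or files whose sincedb entry was deleted, are read from the
    # start, so their older entries are indexed too
    start_position => "beginning"
  }
}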
