Stop pushing data into Elasticsearch initiated by the Logstash "exec" plugin - elasticsearch

I am very new to Elasticsearch and stuck on a problem. I have made a Logstash configuration file named test.conf, which is as follows:
input
{
exec
{
command => "free"
interval => 1
}
}
output
{
elasticsearch
{
host => "localhost"
protocol => "http"
}
}
Now I execute this config file so that it starts pushing data into Elasticsearch every second, with the following command:
$ /opt/logstash/bin/logstash -f test.conf
I am using Kibana to display the data inserted into Elasticsearch.
Since data keeps being added to Elasticsearch every second, I cannot figure out how to stop this data-insertion job. Please help me out.
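A hedged sketch of one way to handle this (assuming Logstash was started in the foreground as shown above): ingestion stops as soon as the Logstash process stops, and documents that were already indexed can be removed separately. The logstash-* index pattern below is the output plugin's default and may differ in your setup.
# Stop the pipeline: press Ctrl+C in the terminal running Logstash,
# or terminate the process by its command line.
kill $(pgrep -f "logstash.*test.conf")
# Optionally delete the documents that were already indexed
# (this removes the daily logstash-* indices entirely).
curl -XDELETE 'http://localhost:9200/logstash-*'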

Related

Timeout reached in KV filter with value entry too large

I'm trying to build a new ELK project. I'm a newbie here, so I'm not sure what I'm missing. I'm trying to move very large logs to ELK, and while doing so it times out in the KV filter with the error "Timeout reached in KV filter with value entry too large".
My Logstash filter is in the format below:
grok {
match => [ "message", "(?<timestamp>%{MONTHDAY:monthday} %{MONTH:month} %{YEAR:year} %{TIME:time}) \[%{LOGLEVEL:loglevel}\] %{DATA:requestId} \(%{DATA:thread}\) %{JAVAFILE:className}: %{GREEDYDATA:logMessage}" ]
}
kv {
source => "logMessage"
}
Is there a way I can skip the KV filter when the logs are huge? If so, can someone guide me on how that can be done?
Thank you
I have tried multiple things but nothing seemed to work.
I solved this by using dissect.
The filter was something along the lines of:
dissect {
  mapping => {
    "message" => "%{[@metadata][timestamp]} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{loglevel} %{requestId} %{thread} %{classname} %{logMessage}"
  }
}
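An alternative, hedged sketch for the original question of skipping the KV filter when a message is huge: Logstash conditionals can test a field against a regular expression, so kv can be limited to messages that do not contain an extremely long unbroken run of text (the 5000-character threshold is only an illustration):
filter {
  # skip the kv parse when logMessage contains an unbroken run of more than 5000 characters
  if [logMessage] !~ /.{5001,}/ {
    kv {
      source => "logMessage"
    }
  }
}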

Create a new index in elasticsearch for each log file by date

Currently
I have completed the above task by using one log file and passing the data with Logstash to one index in Elasticsearch:
yellow open logstash-2016.10.19 5 1 1000807 0 364.8mb 364.8mb
What I actually want to do
If I have the following log files, which are named according to year, month and date:
MyLog-2016-10-16.log
MyLog-2016-10-17.log
MyLog-2016-10-18.log
MyLog-2016-11-05.log
MyLog-2016-11-02.log
MyLog-2016-11-03.log
I would like to tell Logstash to read them by year, month and date and create the following indices:
yellow open MyLog-2016-10-16.log
yellow open MyLog-2016-10-17.log
yellow open MyLog-2016-10-18.log
yellow open MyLog-2016-11-05.log
yellow open MyLog-2016-11-02.log
yellow open MyLog-2016-11-03.log
Please could I have some guidance as to how I need to go about doing this?
Thank you
It is as simple as that:
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "MyLog-%{+YYYY-MM-dd}.log"
}
}
If the lines in the file contain datetime info, you should be using the date{} filter to set @timestamp from that value. If you do this, you can use the output format that @Renaud provided, "MyLog-%{+YYYY.MM.dd}".
If the lines don't contain the datetime info, you can use the input's path for your index name, e.g. "%{path}". To get just the basename of the path:
mutate {
gsub => [ "path", ".*/", "" ]
}
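Putting those two hints together, a hedged sketch (the "logtime" field name and its format are assumptions, not taken from the question; adjust them to whatever an earlier grok actually extracts):
filter {
  date {
    # set @timestamp from the timestamp parsed out of the line itself
    match => [ "logtime", "yyyy-MM-dd HH:mm:ss" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per day, derived from @timestamp; lowercase names avoid
    # invalid-index-name errors in Elasticsearch
    index => "mylog-%{+YYYY.MM.dd}"
  }
}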
Won't this configuration in the output section be sufficient for your purpose?
output {
elasticsearch {
embedded => false
host => "localhost"
port => 9200
protocol => "http"
cluster => 'elasticsearch'
index => "syslog-%{+YYYY.MM.dd}"
}
}

For ELK, sometimes Logstash says “no such index”; how to set automatic index creation in ES when there is “no such index”?

I found some problems with ELK, can anyone help me?
logstash 2.4.0
elasticsearch 2.4.0
3 Elasticsearch instances in a cluster
Sometimes Logstash warns:
“ "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", ...”,
and it doesn't work. If I curl -XGET the ES indices, it truly does not have the index.
When this happens, I must kill -9 Logstash and start it again; then it can create an index in ES and it works OK again.
So, my question is: how do I set automatic index creation in ES when there is “no such index”?
My logstash conf is:
input {
tcp {
port => 10514
codec => "json"
}
}
output {
elasticsearch {
hosts => [ "9200.xxxxxx.com:9200" ]
index => "log001-%{+YYYY.MM.dd}"
}
}
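For reference, whether Elasticsearch creates a missing index on the fly is governed by the action.auto_create_index setting; the sketch below is a hedged illustration only, and the setting already defaults to allowing automatic creation:
# elasticsearch.yml on each Elasticsearch node:
# "true" (the default) lets any missing index be created automatically;
# a pattern whitelist such as "+log001-*" can be used instead to restrict it.
action.auto_create_index: true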

Issue in reading a log file that contains a date in its name

I have 2 Linux boxes set up, in which one box contains a component which generates logs and has Logstash installed on it to transfer the logs. In the other box I have Redis, Elasticsearch and Logstash; here Logstash acts as an indexer to grok the data.
Now my problem is that on the 1st box the component generates a new log file every day, and the only difference is that the log file name varies as per the date,
like
counters-20151120-0.log
counters-20151121-0.log
counters-20151122-0.log
and so on. I have included the below type of code in my Logstash shipper conf file:
file {
path => "/opt/data/logs/counters-%{YEAR}%{MONTHNUM}%{MONTHDAY}*.log"
type => "rg_counters"
}
And in my Logstash indexer, I have the below type of code to catch those log files:
if [type] == "rg_counters" {
grok{
match => ["message", "%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{HOUR}:%{MINUTE}:%{SECOND}\s*(?<counters_raw_data>[0-9\-A-Z]*)\s*(?<counters_operation_type>[\-A-Z]*)\s*%{GREEDYDATA:counters_extradata}"]
}
}
output {
elasticsearch { host => ["elastichost1","elastichost1" ] port => "9200" protocol => "http" }
stdout { codec => rubydebug }
}
Please note that this is a working setup and other types of log files are getting transferred and processed successfully, so there is no issue with the setup.
The problem is how to process this log file which contains a date in its file name.
Any help here?
Thanks in advance!!
Based on the comments...
Instead of trying to use regexp patterns in your path:
path => "/opt/data/logs/counters-%{YEAR}%{MONTHNUM}%{MONTHDAY}*.log"
just use glob patterns:
path => "/opt/data/logs/counters-*.log"
Logstash will remember which files (inodes) it has seen before.
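If the date from the file name is still needed downstream (for example to build the index name), a hedged sketch is to grok it out of the path field instead of the message; the log_year/log_month/log_day field names are illustrative:
filter {
  if [type] == "rg_counters" {
    grok {
      # pull the date out of the file's path rather than out of the log line
      match => [ "path", "counters-%{YEAR:log_year}%{MONTHNUM:log_month}%{MONTHDAY:log_day}" ]
    }
  }
}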

Cannot load index to elasticsearch from external file, using logstash 1.4.2 on Windows 7

When trying to load a file into Elasticsearch, using Logstash running the config file below, I get the following output messages on Elasticsearch and no file is loaded (when the input is configured to be stdin, everything seems to be working just fine):
[2014-08-20 10:51:10,957][INFO ][cluster.service ] [Max] added {[logstash-GURWB02038-5480-4002][dstQagpWTfGkSU5Ya-sUcQ][GURWB02038][inet[/10.203.152.139:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-GURWB02038-5480-4002][dstQagpWTfGkSU5Ya-sUcQ][GURWB02038][inet[/10.203.152.139:9301]]{client=true, data=false}])
The Logstash config file that I used is below:
input {
file {
path => "D:/example.log"
}
}
output {
elasticsearch {
host => "localhost"
}
}
You might be missing start_position.
Try with something like this.
input {
file {
path => "D:/example.log"
start_position => "beginning"
}
}
Also take the "first contact" restriction into account, according to the documentation.
start_position
Value can be any of: "beginning", "end"
Default value is "end"
Choose where Logstash starts initially reading files: at the beginning or at the end.
The default behavior treats files like live streams and thus starts at the end.
If you have old data you want to import, set this to ‘beginning’
This option only modifies “first contact” situations where a file is new and not seen
before. If a file has already been seen before, this option has no effect.
Hope this helps.
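One more hedged suggestion for repeated test runs on Windows: the file input records what it has already read in a sincedb file, so while experimenting it can help to point sincedb_path at the Windows null device so the file is re-read on every run (remove this for normal operation):
input {
  file {
    path => "D:/example.log"
    start_position => "beginning"
    # testing only: do not persist read positions between runs
    sincedb_path => "NUL"
  }
}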
From all the examples it seems that the syntax is:
output {
elasticsearch {
host => "localhost"
}
}
