I'm trying to push data from a Kafka topic into Elasticsearch with Logstash, but I get this error when I start Logstash:
error code: A plugin had an unrecoverable error
How can I fix this? My config file is below.
input {
kafka{
bootstrap_servers =>"localhosts:9092"
topics => ["cars"]
}
}
filter{
csv {
separator =>","
columns => [ "maker", "model", "mileage", "manufacture_year", "engine_displacement", "engine_power", "body_type", "color_slug", "stk_year", "transmission", "door_count", "seat_count", "fuel_type", "date_created", "date_last_seen", "price_eur" ]
}
mutate {convert => ["mileage", "integer"] }
mutate {convert => ["price_eur", "float"] }
mutate {convert => ["engine_power", "integer"] }
mutate {convert => ["door_power", "integer"] }
mutate {convert => ["seat_count", "integer"] }
}
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "cars1"
document_type=>"sold_cars"
}
stdout{}
}
The convert option of the mutate filter takes a hash, not an array: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-convert
Note also the typo in your kafka input: bootstrap_servers should be "localhost:9092", not "localhosts:9092".
Try it like this:
input {
kafka {
bootstrap_servers => "localhost:9092"
topics => ["cars"]
}
}
filter {
csv {
separator => ","
columns => ["maker", "model", "mileage", "manufacture_year", "engine_displacement", "engine_power", "body_type", "color_slug", "stk_year", "transmission", "door_count", "seat_count", "fuel_type", "date_created", "date_last_seen", "price_eur"]
}
mutate {
convert => {
"mileage" => "integer"
"price_eur" => "float"
"engine_power" => "integer"
"door_power" => "integer"
"seat_count" => "integer"
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "cars1"
document_type => "sold_cars"
}
stdout {}
}
You can also use convert inside the csv filter itself, like so:
csv {
separator => ","
columns => ["maker", "model", "mileage", "manufacture_year", "engine_displacement", "engine_power", "body_type", "color_slug", "stk_year", "transmission", "door_count", "seat_count", "fuel_type", "date_created", "date_last_seen", "price_eur"]
convert => {
"mileage" => "integer"
"price_eur" => "float"
"engine_power" => "integer"
"door_power" => "integer"
"seat_count" => "integer"
}
}
This is my logstash.conf file:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["%{headers.request_path}", "/"]
add_field => { "index_id" => "%{headers.request_path[0]}" }
add_field => { "document_id" => "%{headers.request_path[1]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}
When I send a PUT request like
C:\Users\BolverkXR\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe
-XPUT 'http://127.0.0.1:31311/twitter'
I want a new index to be created with the name twitter, instead of using the ElasticSearch default.
However, Logstash crashes immediately with the following (truncated) error message:
Exception in pipelineworker, the pipeline stopped processing new
events, please check your filter configuration and restart Logstash.
org.logstash.FieldReference$IllegalSyntaxException: Invalid
FieldReference: headers.request_path[0]
I am sure I have made a syntax error somewhere, but I can't see where it is. How can I fix this?
EDIT:
The same error occurs when I change the filter segment to the following:
filter {
mutate {
split => ["%{[headers][request_path]}", "/"]
add_field => { "index_id" => "%{[headers][request_path][0]}" }
add_field => { "document_id" => "%{[headers][request_path][1]}" }
}
}
To split the field, the %{foo} sprintf syntax is not used; the split option takes a field reference. Also, you should start at position [1] of the array, because position [0] will contain an empty string ("") since there are no characters to the left of the first separator (/). Your filter section should instead be something like this:
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
You can now use the values in %{index_id} and %{document_id}. I tested this with Logstash 6.5.3, using Postman to send an HTTP request to 'http://127.0.0.1:31311/twitter/1', and the console output was as follows:
{
"message" => "",
"index_id" => "twitter",
"document_id" => "1",
"#version" => "1",
"host" => "127.0.0.1",
"#timestamp" => 2019-04-09T12:15:47.098Z,
"headers" => {
"connection" => "keep-alive",
"http_version" => "HTTP/1.1",
"http_accept" => "*/*",
"cache_control" => "no-cache",
"content_length" => "0",
"postman_token" => "cb81754f-6d1c-4e31-ac94-fde50c0fdbf8",
"accept_encoding" => "gzip, deflate",
"request_path" => [
[0] "",
[1] "twitter",
[2] "1"
],
"http_host" => "127.0.0.1:31311",
"http_user_agent" => "PostmanRuntime/7.6.1",
"request_method" => "PUT"
}
}
The output section of your configuration does not change. So, your final logstash.conf file will be something like this:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}
I'm using Filebeat to push nginx logs to Logstash and then to Elasticsearch.
Logstash filter:
filter {
if [fileset][module] == "nginx" {
if [fileset][name] == "access" {
grok {
match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
remove_field => "message"
}
mutate {
add_field => { "read_timestamp" => "%{#timestamp}" }
}
date {
match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
remove_field => "[nginx][access][time]"
}
useragent {
source => "[nginx][access][agent]"
target => "[nginx][access][user_agent]"
remove_field => "[nginx][access][agent]"
}
geoip {
source => "[nginx][access][remote_ip]"
target => "[nginx][access][geoip]"
}
}
else if [fileset][name] == "error" {
grok {
match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
remove_field => "message"
}
mutate {
rename => { "#timestamp" => "read_timestamp" }
}
date {
match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
remove_field => "[nginx][error][time]"
}
}
}
}
There is just one file, /var/log/nginx/access.log.
In Kibana, I see roughly half of the rows with a parsed message and the other half unparsed.
All of the rows in Kibana have the tag "beats_input_codec_plain_applied".
Examples from filebeat -e output:
Row that works fine:
"source": "/var/log/nginx/access.log",
"offset": 5405195,
"message": "...",
"fileset": {
"module": "nginx",
"name": "access"
}
Row that doesn't work (no "fileset" field):
"offset": 5405397,
"message": "...",
"source": "/var/log/nginx/access.log"
Any idea what could be the cause?
I'm using Logstash and Elasticsearch to collect tweets with the Twitter input plugin. My problem is that I receive a document from Twitter and I would like to do some preprocessing before indexing it. Let's say I have this as a document coming from Twitter:
{
"tweet": {
"tweetId": 1025,
"tweetContent": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch",
"hashtags": ["stackOverflow", "elasticsearch"],
"publishedAt": "2017 23 August",
"analytics": {
"likeNumber": 400,
"shareNumber": 100,
}
},
"author":{
"authorId": 819744,
"authorAt": "the_expert",
"authorName": "John Smith",
"description": "Haha it's a fake description"
}
}
Now, out of this document that Twitter sends me, I would like to generate two documents:
The first one will be indexed in twitter/tweet/1025:
# The id for this document should be the one from tweetId `"tweetId": 1025`
{
"content": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch", # this field has been renamed
"hashtags": ["stackOverflow", "elasticsearch"],
"date": "2017/08/23", # the date has been formated
"shareNumber": 100 # This field has been flattened
}
The second one will be indexed in twitter/author/819744:
# The id for this document should be the one from authorId `"authorId": 819744 `
{
"authorAt": "the_expert",
"description": "Haha it's a fake description"
}
I have defined my output as follows:
output {
stdout { codec => dots }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
}
}
How can I process the information from Twitter?
EDIT:
So my full config file should look like:
input {
twitter {
consumer_key => "consumer_key"
consumer_secret => "consumer_secret"
oauth_token => "access_token"
oauth_token_secret => "access_token_secret"
keywords => [ "random", "word"]
full_tweet => true
type => "tweet"
}
}
filter {
clone {
clones => ["author"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt"]
}
} else {
mutate {
remove_field => ["tweetId", "tweetContent"]
}
}
}
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "author"
document_id => "%{[authorId]}"
}
}
}
You could use the clone filter plugin in Logstash.
Here is a sample Logstash configuration file that takes JSON input from stdin and simply shows the output on stdout:
input {
stdin {
codec => json
type => "tweet"
}
}
filter {
mutate {
add_field => {
"tweetId" => "%{[tweet][tweetId]}"
"content" => "%{[tweet][tweetContent]}"
"date" => "%{[tweet][publishedAt]}"
"shareNumber" => "%{[tweet][analytics][shareNumber]}"
"authorId" => "%{[author][authorId]}"
"authorAt" => "%{[author][authorAt]}"
"description" => "%{[author][description]}"
}
}
date {
match => ["date", "yyyy dd MMMM"]
target => "date"
}
ruby {
code => '
event.set("hashtags", event.get("[tweet][hashtags]"))
'
}
clone {
clones => ["author"]
}
mutate {
remove_field => ["author", "tweet", "message"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt", "description"]
}
} else {
mutate {
remove_field => ["tweetId", "content", "hashtags", "date", "shareNumber"]
}
}
}
output {
stdout {
codec => rubydebug
}
}
Using as input:
{"tweet": { "tweetId": 1025, "tweetContent": "Hey this is a fake document", "hashtags": ["stackOverflow", "elasticsearch"], "publishedAt": "2017 23 August","analytics": { "likeNumber": 400, "shareNumber": 100 } }, "author":{ "authorId": 819744, "authorAt": "the_expert", "authorName": "John Smith", "description": "fake description" } }
You would get these two documents:
{
"date" => 2017-08-23T00:00:00.000Z,
"hashtags" => [
[0] "stackOverflow",
[1] "elasticsearch"
],
"type" => "tweet",
"tweetId" => "1025",
"content" => "Hey this is a fake document",
"shareNumber" => "100",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"#version" => "1",
"host" => "my-host"
}
{
"description" => "fake description",
"type" => "author",
"authorId" => "819744",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"authorAt" => "the_expert",
"#version" => "1",
"host" => "my-host"
}
You could alternatively use a ruby filter to flatten the nested fields and then rename fields with mutate where necessary, as sketched below.
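For illustration, here is a minimal, untested sketch of that alternative. The field names are taken from the sample tweet above; adjust them to your actual documents:
filter {
  # Copy nested values up to the top level with a ruby filter
  # (event.get/event.set take Logstash field references).
  ruby {
    code => '
      event.set("shareNumber", event.get("[tweet][analytics][shareNumber]"))
      event.set("hashtags", event.get("[tweet][hashtags]"))
    '
  }
  # Where a plain rename is enough, mutate/rename does the flattening.
  mutate {
    rename => {
      "[tweet][tweetContent]" => "content"
      "[author][authorAt]"    => "authorAt"
      "[author][description]" => "description"
    }
  }
}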
If you want Elasticsearch to use authorId and tweetId instead of the auto-generated ID, you can configure the elasticsearch output with document_id:
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[authorId]}"
}
}
}
I need to parse my Date column, and it gives me an error.
input {
file {
path => "/home/osboxes/ELK/logstash/data/data.csv"
start_position => "beginning"
}
}
filter {
csv {
separator => ","
columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
}
mutate {convert => ["High", "float"]}
mutate {convert => ["Open", "float"]}
mutate {convert => ["Low", "float"]}
mutate {convert => ["Close", "float"]}
mutate {convert => ["Volume", "float"]}
}
output {
elasticsearch {
action => "index"
hosts => "localhost:9200"
index => "stock"
workers => 1
}
stdout {}
}
The data.csv file I'm reading looks like this:
Date,Open,High,Low,Close,Volume,Adj Close
2015-04-02,125.03,125.56,124.19,125.32,32120700,125.32
2015-04-01,124.82,125.12,123.10,124.25,40359200,124.25
What am I missing? Thanks in advance.
My Logstash terminal only says this:
$ bin/logstash -f /home/osboxes/ELK/logstash/logstash.conf
Settings: Default pipeline workers: 2
Pipeline main started
Add a date statement to the filter:
date {
match => [ "Date", "YYYY-MM-dd" ]
}
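For context, here is a sketch of how that date block could fit into your existing filter section. This assumes you want the parsed date written into @timestamp (the date filter's default target); note that the header row ("Date,Open,High,...") will fail to parse and simply be tagged with _dateparsefailure:
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
  }
  # Parse the Date column into @timestamp (the default target).
  # Add e.g. `target => "Date"` if you prefer to keep @timestamp as the ingest time.
  date {
    match => [ "Date", "YYYY-MM-dd" ]
  }
  mutate {
    convert => {
      "Open"   => "float"
      "High"   => "float"
      "Low"    => "float"
      "Close"  => "float"
      "Volume" => "float"
    }
  }
}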
I am trying to use the elapsed.rb filter in the ELK stack and can't seem to figure it out. I am not very familiar with grok, and I believe that is where my issue lies. Can anyone help?
Example Log Files:
{
"application_name": "Application.exe",
"machine_name": "Machine1",
"user_name": "testuser",
"entry_date": "2015-03-12T18:12:23.5187552Z",
"chef_environment_name": "chefenvironment1",
"chef_logging_cookbook_version": "0.1.9",
"logging_level": "INFO",
"performance": {
"process_name": "account_search",
"process_id": "Machine1|1|635617555435187552",
"event_type": "enter"
},
"thread_name": "1",
"logger_name": "TestLogger",
"#version": "1",
"#timestamp": "2015-03-12T18:18:48.918Z",
"type": "rabbit",
"log_from": "rabbit"
}
{
"application_name": "Application.exe",
"machine_name": "Machine1",
"user_name": "testuser",
"entry_date": "2015-03-12T18:12:23.7527462Z",
"chef_environment_name": "chefenvironment1",
"chef_logging_cookbook_version": "0.1.9",
"logging_level": "INFO",
"performance": {
"process_name": "account_search",
"process_id": "Machine1|1|635617555435187552",
"event_type": "exit"
},
"thread_name": "1",
"logger_name": "TestLogger",
"#version": "1",
"#timestamp": "2015-03-12T18:18:48.920Z",
"type": "rabbit",
"log_from": "rabbit"
}
Example .conf file
input {
rabbitmq {
host => "SERVERNAME"
add_field => ["log_from", "rabbit"]
type => "rabbit"
user => "testuser"
password => "testuser"
durable => "true"
exchange => "Logging"
queue => "testqueue"
codec => "json"
exclusive => "false"
passive => "true"
}
}
filter {
grok {
match => ["message", "%{TIMESTAMP_ISO8601} START id: (?<process_id>.*)"]
add_tag => [ "taskStarted" ]
}
grok {
match => ["message", "%{TIMESTAMP_ISO8601} END id: (?<process_id>.*)"]
add_tag => [ "taskTerminated"]
}
elapsed {
start_tag => "taskStarted"
end_tag => "taskTerminated"
unique_id_field => "process_id"
timeout => 10000
new_event_on_match => false
}
}
output {
file {
codec => json { charset => "UTF-8" }
path => "test.log"
}
}
You would not need a grok filter, because your input is already in JSON format. You'd need to do something like this:
if [performance][event_type] == "enter" {
mutate { add_tag => ["taskStarted"] }
} else if [performance][event_type] == "exit" {
mutate { add_tag => ["taskTerminated"] }
}
elapsed {
start_tag => "taskStarted"
end_tag => "taskTerminated"
unique_id_field => "performance.process_id"
timeout => 10000
new_event_on_match => false
}
I'm not positive about that unique_id_field; I think it should work, but if it doesn't, you could change it to just process_id and first copy the value with add_field => { "process_id" => "%{[performance][process_id]}" }.
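For example, that fallback could look something like this (a sketch only, not tested against your setup):
filter {
  # Copy the nested id to a top-level field so elapsed can correlate
  # the "enter" and "exit" events on it.
  mutate {
    add_field => { "process_id" => "%{[performance][process_id]}" }
  }
  if [performance][event_type] == "enter" {
    mutate { add_tag => ["taskStarted"] }
  } else if [performance][event_type] == "exit" {
    mutate { add_tag => ["taskTerminated"] }
  }
  elapsed {
    start_tag => "taskStarted"
    end_tag => "taskTerminated"
    unique_id_field => "process_id"
    timeout => 10000
    new_event_on_match => false
  }
}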