logstash array of key value pairs - filter

I am using logstash and I would like to know if there is a way to handle the following:
Using the xml filter I am able to extract a properties field
<?xml version="1.0"?>
<event logger="RemoteEventReceiver1" timestamp="2016-07-21T12:39:04.0607421-05:00" level="DEBUG" thread="26" domain="/LM/W3SVC/2/ROOT-1-131135962764935573" username="TOOTHLESS\dvdp4">
<message>Test message</message>
<properties>
<data name="log4net:HostName" value="Toothless"/>
<data name="log4net:Customer" value="Bob"/>
</properties>
</event>
that looks like this
"properties" => [
[0] {
"data" => [
[0] {
"name" => "HostName",
"value" => "Toothless"
},
[1] {
"name" => "Customer",
"value" => "Bob"
}
]
}
]
How can I convert it to this?
"propertiesParsed" => {
"HostName" => "Toothless",
"Customer" => "Bob"
}
* UPDATE ADDING CONFIG AND DATA FILE *
input {
file {
type => "log4net"
path => ["D:/temp/MR4SPO.log"]
start_position => "beginning"
sincedb_path => "nul"
}
}
filter
{
mutate {
# remove xml prefixes in the message field
gsub => [ "message", "log4net:", "" ]
}
xml {
source => "message"
target => "log4net"
add_field => {
log4net_message => "%{[log4net][message]}"
# "[log4net][messagetest]" => [log4net][message]
# xxx => "%{[log4net][properties][0][data]}"
}
remove_field => "message"
}
# get json message from log4net
if [log4net_message] =~ "^LS:\s{" {
ruby { code => "event['log4net_message'] = event['log4net_message'][3..-1]" }
json {
source => "log4net_message"
# target => "log4net_json"
}
mutate {
add_field => { forMQ => true }
}
}
mutate {
remove_field => "log4net_message"
}
}
# output logs to console and to elasticsearch
output {
if [forMQ] {
stdout { codec => rubydebug }
}
# elasticsearch { hosts => ["localhost:9200"] }
}
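The mutate/gsub above strips the log4net: namespace prefix everywhere in the raw line, including inside attribute values like name="log4net:HostName", which is why the parsed data later shows "name" => "HostName". A sketch of that substitution in plain Ruby, on a fragment of the data file:

```ruby
# gsub replaces every occurrence of "log4net:", both in tag names
# and inside attribute values.
line = '<log4net:message>My test one</log4net:message>' \
       '<log4net:data name="log4net:HostName" value="Toothless" />'

cleaned = line.gsub("log4net:", "")
cleaned  # => '<message>My test one</message><data name="HostName" value="Toothless" />'
```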
* DATA FILE *
<log4net:event logger="SPMRDLAdd_InWeb.Services.RemoteEventReceiver1" timestamp="2016-07-21T12:39:03.0607421-05:00" level="DEBUG" thread="26" domain="/LM/W3SVC/2/ROOT-1-131135962764935573" username="TOOTHLESS\dvdp4"><log4net:message>My test one</log4net:message><log4net:properties><log4net:data name="log4net:HostName" value="Toothless" /></log4net:properties></log4net:event>
<log4net:event logger="SPMRDLAdd_InWeb.Services.RemoteEventReceiver1" timestamp="2016-07-21T12:39:04.0607421-05:00" level="DEBUG" thread="26" domain="/LM/W3SVC/2/ROOT-1-131135962764935573" username="TOOTHLESS\dvdp4"><log4net:message>LS: { "name" : "file123.jpg", "size" : 50 }</log4net:message><log4net:properties><log4net:data name="log4net:HostName" value="Toothless" /></log4net:properties></log4net:event>

You can add this ruby filter (on Logstash 5+ the event is accessed with event.get/event.set; on older versions you would write event['...'] directly):
...
ruby {
code => "
parsed = {}
event.get('[log4net][properties]').each do |prop|
prop['data'].each do |data|
parsed[data['name']] = data['value']
end
end
event.set('propertiesParsed', parsed)
"
}
...
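The core flattening logic can be sketched as plain Ruby outside Logstash, using the structure shown in the rubydebug output above:

```ruby
# Flatten log4net's properties/data array into a name => value hash,
# mirroring what the ruby filter does to the event.
properties = [
  {
    "data" => [
      { "name" => "HostName", "value" => "Toothless" },
      { "name" => "Customer", "value" => "Bob" }
    ]
  }
]

parsed = {}
properties.each do |prop|
  prop["data"].each do |data|
    parsed[data["name"]] = data["value"]
  end
end

parsed  # => { "HostName" => "Toothless", "Customer" => "Bob" }
```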

Related

How can I fully parse json into ElasticSearch?

I'm parsing a mongodb input into logstash, the config file is as follows:
input {
mongodb {
uri => "<mongouri>"
placeholder_db_dir => "<path>"
collection => "modules"
batch_size => 5000
}
}
filter {
mutate {
rename => { "_id" => "mongo_id" }
remove_field => ["host", "#version"]
}
json {
source => "message"
target => "log"
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => ["localhost:9200"]
action => "index"
index => "mongo_log_modules"
}
}
This outputs 2 of the 3 documents from the collection into elasticsearch:
{
"mongo_title" => "user",
"log_entry" => "{\"_id\"=>BSON::ObjectId('60db49309fbbf53f5dd96619'), \"title\"=>\"user\", \"modules\"=>[{\"module\"=>\"user-dashboard\", \"description\"=>\"User Dashborad\"}, {\"module\"=>\"user-assessment\", \"description\"=>\"User assessment\"}, {\"module\"=>\"user-projects\", \"description\"=>\"User projects\"}]}",
"mongo_id" => "60db49309fbbf53f5dd96619",
"logdate" => "2021-06-29T16:24:16+00:00",
"application" => "mongo-modules",
"#timestamp" => 2021-10-02T05:08:38.091Z
}
{
"mongo_title" => "candidate",
"log_entry" => "{\"_id\"=>BSON::ObjectId('60db49519fbbf53f5dd96644'), \"title\"=>\"candidate\", \"modules\"=>[{\"module\"=>\"candidate-dashboard\", \"description\"=>\"User Dashborad\"}, {\"module\"=>\"candidate-assessment\", \"description\"=>\"User assessment\"}]}",
"mongo_id" => "60db49519fbbf53f5dd96644",
"logdate" => "2021-06-29T16:24:49+00:00",
"application" => "mongo-modules",
"#timestamp" => 2021-10-02T05:08:38.155Z
}
It seems the stdout output dumps an unparsable Ruby-inspect string into
"log_entry".
After adding the "rename" fields, "modules" won't be added as a field.
I've tried the grok mutate filter, but after the _id, %{DATA}, %{QUOTEDSTRING} and %{WORD} aren't working for me.
I've also tried updating a nested mapping in the index, but that didn't seem to work either.
Is there anything else I can try to get the FULLY nested structure into elasticsearch?
The solution is to filter with mutate:
mutate { gsub => [ "log_entry", "=>", ": " ] }
mutate { gsub => [ "log_entry", "BSON::ObjectId\('([0-9a-z]+)'\)", '"\1"' ]}
json { source => "log_entry" remove_field => [ "log_entry" ] }
Outputs to stdout
"_id" => "60db49309fbbf53f5dd96619",
"title" => "user",
"modules" => [
[0] {
"module" => "user-dashboard",
"description" => "User Dashborad"
},
[1] {
"module" => "user-assessment",
"description" => "User assessment"
},
[2] {
"module" => "user-projects",
"description" => "User projects"
}
],
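The two gsub substitutions turn the Ruby-inspect string in log_entry into valid JSON; a standalone sketch of what they do:

```ruby
require "json"

# log_entry is a Ruby hash inspect string, not JSON: "=>" separators and
# BSON::ObjectId('...') wrappers keep JSON.parse from working.
log_entry = %q({"_id"=>BSON::ObjectId('60db49309fbbf53f5dd96619'), "title"=>"user"})

log_entry = log_entry.gsub("=>", ": ")
log_entry = log_entry.gsub(/BSON::ObjectId\('([0-9a-z]+)'\)/, '"\1"')

doc = JSON.parse(log_entry)
doc["_id"]   # => "60db49309fbbf53f5dd96619"
doc["title"] # => "user"
```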

multiline field in csv (logstash)

I am trying to make a multiline field in a CSV file work in logstash,
but the multiline field is not being parsed.
My logstash.conf content is:
input {
file {
type => "normal"
path => "/etc/logstash/*.csv"
start_position => "beginning"
sincedb_path => "/dev/null"
codec => multiline {
pattern => "."
negate => true
what => "previous"
}
}
}
filter {
if [type] == "normal" {
csv {
separator => ","
columns => ["make", "model", "doors"]
}
mutate {convert => ["doors","integer"] }
}
}
output {
if [type] == "normal" {
elasticsearch {
hosts => "<put_local_ip>"
user => "<put_user>"
password => "<put_password>"
index => "cars"
document_type => "sold_cars"
}
stdout {}
}
}
The .csv with a multi-line value (in quotes) for the make field is:
make,model,doors
mazda,mazda6,4
"mitsubishi
4000k", galant,2
honda,civic,4
After I run "logstash -f /etc/logstash/logstash.conf"
I get a parse failure; from the logs:
{
"tags" => [
[0] "_csvparsefailure"
],
"#timestamp" => 2020-07-13T19:13:11.339Z,
"type" => "normal",
"host" => "<host_ip_greyedout>",
"message" => "\"mitsubishi",
"#version" => "1",
"path" => "/etc/logstash/cars4.csv"
}
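For reference, a compliant CSV parser treats the quoted newline as part of the make field; that is the shape the multiline codec has to reassemble before the csv filter sees the event. A sketch with Ruby's standard CSV library, on the same data as above:

```ruby
require "csv"

# The quoted field spans two physical lines; CSV keeps the embedded newline.
data = <<~CSV
  make,model,doors
  "mitsubishi
  4000k", galant,2
  honda,civic,4
CSV

rows = CSV.parse(data, headers: true)
rows[0]["make"]  # => "mitsubishi\n4000k" (one field spanning two lines)
rows[1]["make"]  # => "honda"
```

The _csvparsefailure in the logs shows the codec handed the filter only the fragment "mitsubishi, i.e. the physical lines were never joined back together.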

Invalid FieldReference occurred when attempting to create index with the same name as request path value using ElasticSearch output

This is my logstash.conf file:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["%{headers.request_path}", "/"]
add_field => { "index_id" => "%{headers.request_path[0]}" }
add_field => { "document_id" => "%{headers.request_path[1]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}
When I send a PUT request like
C:\Users\BolverkXR\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe
-XPUT 'http://127.0.0.1:31311/twitter'
I want a new index to be created with the name twitter, instead of using the ElasticSearch default.
However, Logstash crashes immediately with the following (truncated) error message:
Exception in pipelineworker, the pipeline stopped processing new
events, please check your filter configuration and restart Logstash.
org.logstash.FieldReference$IllegalSyntaxException: Invalid
FieldReference: headers.request_path[0]
I am sure I have made a syntax error somewhere, but I can't see where it is. How can I fix this?
EDIT:
The same error occurs when I change the filter segment to the following:
filter {
mutate {
split => ["%{[headers][request_path]}", "/"]
add_field => { "index_id" => "%{[headers][request_path][0]}" }
add_field => { "document_id" => "%{[headers][request_path][1]}" }
}
}
To split the field, the %{foo} syntax is not used. Also, you should start at position [1] of the array, because position [0] holds an empty string (""): there are no characters to the left of the first separator (/). Instead, your filter section should be something like this:
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
You can now use the values in %{index_id} and %{document_id}. I tested this with Logstash 6.5.3, using Postman to send the 'http://127.0.0.1:31311/twitter/1' HTTP request, and the console output was as follows:
{
"message" => "",
"index_id" => "twitter",
"document_id" => "1",
"#version" => "1",
"host" => "127.0.0.1",
"#timestamp" => 2019-04-09T12:15:47.098Z,
"headers" => {
"connection" => "keep-alive",
"http_version" => "HTTP/1.1",
"http_accept" => "*/*",
"cache_control" => "no-cache",
"content_length" => "0",
"postman_token" => "cb81754f-6d1c-4e31-ac94-fde50c0fdbf8",
"accept_encoding" => "gzip, deflate",
"request_path" => [
[0] "",
[1] "twitter",
[2] "1"
],
"http_host" => "127.0.0.1:31311",
"http_user_agent" => "PostmanRuntime/7.6.1",
"request_method" => "PUT"
}
}
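The split indices follow directly from splitting a string that begins with the separator; position 0 is the empty string before the first /:

```ruby
# Splitting a leading-separator path leaves "" at index 0.
parts = "/twitter/1".split("/")
parts  # => ["", "twitter", "1"]

# index_id    comes from parts[1] => "twitter"
# document_id comes from parts[2] => "1"
```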
The output section of your configuration does not change. So, your final logstash.conf file will be something like this:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}

How to preprocess a document before indexation?

I'm using logstash and elasticsearch to collect tweets using the Twitter plugin. My problem is that I receive a document from twitter and I would like to do some preprocessing before indexing it. Let's say I have this as a document result from twitter:
{
"tweet": {
"tweetId": 1025,
"tweetContent": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch",
"hashtags": ["stackOverflow", "elasticsearch"],
"publishedAt": "2017 23 August",
"analytics": {
"likeNumber": 400,
"shareNumber": 100
}
},
"author":{
"authorId": 819744,
"authorAt": "the_expert",
"authorName": "John Smith",
"description": "Haha it's a fake description"
}
}
Now out of this document that twitter is sending me I would like to generate two documents:
the first one will be indexed in twitter/tweet/1025 :
# The id for this document should be the one from tweetId `"tweetId": 1025`
{
"content": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch", # this field has been renamed
"hashtags": ["stackOverflow", "elasticsearch"],
"date": "2017/08/23", # the date has been formatted
"shareNumber": 100 # This field has been flattened
}
The second one will be indexed in twitter/author/819744:
# The id for this document should be the one from authorId `"authorId": 819744 `
{
"authorAt": "the_expert",
"description": "Haha it's a fake description"
}
I have defined my output as follow:
output {
stdout { codec => dots }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
}
}
How can I process the information from twitter?
EDIT:
So my full config file should look like:
input {
twitter {
consumer_key => "consumer_key"
consumer_secret => "consumer_secret"
oauth_token => "access_token"
oauth_token_secret => "access_token_secret"
keywords => [ "random", "word"]
full_tweet => true
type => "tweet"
}
}
filter {
clone {
clones => ["author"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt"]
}
} else {
mutate {
remove_field => ["tweetId", "tweetContent"]
}
}
}
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "author"
document_id => "%{[authorId]}"
}
}
}
You could use the clone filter plugin on logstash.
With a sample logstash configuration file that takes a JSON input from stdin and simply shows the output on stdout:
input {
stdin {
codec => json
type => "tweet"
}
}
filter {
mutate {
add_field => {
"tweetId" => "%{[tweet][tweetId]}"
"content" => "%{[tweet][tweetContent]}"
"date" => "%{[tweet][publishedAt]}"
"shareNumber" => "%{[tweet][analytics][shareNumber]}"
"authorId" => "%{[author][authorId]}"
"authorAt" => "%{[author][authorAt]}"
"description" => "%{[author][description]}"
}
}
date {
match => ["date", "yyyy dd MMMM"]
target => "date"
}
ruby {
code => '
event.set("hashtags", event.get("[tweet][hashtags]"))
'
}
clone {
clones => ["author"]
}
mutate {
remove_field => ["author", "tweet", "message"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt", "description"]
}
} else {
mutate {
remove_field => ["tweetId", "content", "hashtags", "date", "shareNumber"]
}
}
}
output {
stdout {
codec => rubydebug
}
}
Using as input:
{"tweet": { "tweetId": 1025, "tweetContent": "Hey this is a fake document", "hashtags": ["stackOverflow", "elasticsearch"], "publishedAt": "2017 23 August","analytics": { "likeNumber": 400, "shareNumber": 100 } }, "author":{ "authorId": 819744, "authorAt": "the_expert", "authorName": "John Smith", "description": "fake description" } }
You would get these two documents:
{
"date" => 2017-08-23T00:00:00.000Z,
"hashtags" => [
[0] "stackOverflow",
[1] "elasticsearch"
],
"type" => "tweet",
"tweetId" => "1025",
"content" => "Hey this is a fake document",
"shareNumber" => "100",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"#version" => "1",
"host" => "my-host"
}
{
"description" => "fake description",
"type" => "author",
"authorId" => "819744",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"authorAt" => "the_expert",
"#version" => "1",
"host" => "my-host"
}
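The date filter's "yyyy dd MMMM" pattern (Logstash uses Joda/Java time patterns) corresponds to Ruby's %Y %d %B directives; as a rough standalone analogue of the parse it performs on publishedAt:

```ruby
require "date"

# "2017 23 August" -> year, day of month, full month name
d = Date.strptime("2017 23 August", "%Y %d %B")
d.iso8601  # => "2017-08-23"
```

This is why the first output document shows "date" => 2017-08-23T00:00:00.000Z.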
You could alternatively use a ruby script to flatten the fields, and then use rename on mutate, when necessary.
If you want elasticsearch to use authorId and tweetId instead of the default ID, you can configure the elasticsearch output with document_id.
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "author"
document_id => "%{[authorId]}"
}
}
}

input json to logstash - config issues?

I have the following JSON input that I want to dump to logstash (and eventually search/dashboard in elasticsearch/kibana).
{"vulnerabilities":[
{"ip":"10.1.1.1","dns":"z.acme.com","vid":"12345"},
{"ip":"10.1.1.2","dns":"y.acme.com","vid":"12345"},
{"ip":"10.1.1.3","dns":"x.acme.com","vid":"12345"}
]}
I'm using the following logstash configuration:
input {
file {
path => "/tmp/logdump/*"
type => "assets"
codec => "json"
}
}
output {
stdout { codec => rubydebug }
elasticsearch { host => localhost }
}
output
{
"message" => "{\"vulnerabilities\":[\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.788Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"message" => "{\"ip\":\"10.1.1.30\",\"dns\":\"z.acme.com\",\"vid\":\"12345\"},\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.838Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"message" => "{\"ip\":\"10.1.1.31\",\"dns\":\"y.acme.com\",\"vid\":\"12345\"},\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.870Z",
"type" => "shellshock",
"host" => "av1261wag2sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"ip" => "10.1.1.32",
"dns" => "x.acme.com",
"vid" => "12345",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.884Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
Obviously logstash is treating each line as an event, so it thinks {"vulnerabilities":[ is an event, and I'm guessing the trailing commas on the two subsequent nodes mess up the parsing, while the last node appears correct. How do I tell logstash to parse the events inside the vulnerabilities array and to ignore the commas at the end of the line?
Updated: 2014-11-05
Following Magnus' recommendations, I added the json filter and it's working perfectly. However, it would not parse the last line of the json correctly without specifying start_position => "beginning" in the file input block. Any ideas why not? I know the file input tails from the end by default, but I would have anticipated that the mutate/gsub would handle this smoothly.
file {
path => "/tmp/logdump/*"
type => "assets"
start_position => "beginning"
}
}
filter {
if [message] =~ /^\[?{"ip":/ {
mutate {
gsub => [
"message", "^\[{", "{",
"message", "},?\]?$", "}"
]
}
json {
source => "message"
remove_field => ["message"]
}
}
}
output {
stdout { codec => rubydebug }
elasticsearch { host => localhost }
}
You could skip the json codec and use a multiline filter to join the message into a single string that you can feed to the json filter:
filter {
multiline {
pattern => '^{"vulnerabilities":\['
negate => true
what => "previous"
}
json {
source => "message"
}
}
However, this produces the following unwanted results:
{
"message" => "<omitted for brevity>",
"#version" => "1",
"#timestamp" => "2014-10-31T06:48:15.589Z",
"host" => "name-of-your-host",
"tags" => [
[0] "multiline"
],
"vulnerabilities" => [
[0] {
"ip" => "10.1.1.1",
"dns" => "z.acme.com",
"vid" => "12345"
},
[1] {
"ip" => "10.1.1.2",
"dns" => "y.acme.com",
"vid" => "12345"
},
[2] {
"ip" => "10.1.1.3",
"dns" => "x.acme.com",
"vid" => "12345"
}
]
}
Unless there's a fixed number of elements in the vulnerabilities array I don't think there's much we can do with this (without resorting to the ruby filter).
How about just applying the json filter to lines that look like what we want and dropping the rest? Your question doesn't make it clear whether all of the log looks like this, so this may not be so useful.
filter {
if [message] =~ /^\s+{"ip":/ {
# Remove trailing commas
mutate {
gsub => ["message", ",$", ""]
}
json {
source => "message"
remove_field => ["message"]
}
} else {
drop {}
}
}
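What the gsub plus json combination does to a single array-element line can be sketched as:

```ruby
require "json"

# One line from inside the vulnerabilities array, trailing comma included.
line = '{"ip":"10.1.1.1","dns":"z.acme.com","vid":"12345"},'

line = line.sub(/,$/, "")  # drop the trailing comma
doc = JSON.parse(line)     # now a standalone JSON object
doc["ip"]  # => "10.1.1.1"
```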