How to preprocess a document before indexing? - elasticsearch

I'm using Logstash and Elasticsearch to collect tweets with the Twitter input plugin. My problem is that I receive a document from Twitter and I would like to do some preprocessing before indexing it. Let's say this is a sample document coming from Twitter:
{
"tweet": {
"tweetId": 1025,
"tweetContent": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch",
"hashtags": ["stackOverflow", "elasticsearch"],
"publishedAt": "2017 23 August",
"analytics": {
"likeNumber": 400,
"shareNumber": 100,
}
},
"author":{
"authorId": 819744,
"authorAt": "the_expert",
"authorName": "John Smith",
"description": "Haha it's a fake description"
}
}
Now, out of this document that Twitter sends me, I would like to generate two documents.
The first one will be indexed at twitter/tweet/1025:
# The id for this document should be the one from tweetId `"tweetId": 1025`
{
"content": "Hey this is a fake document for stackoverflow #stackOverflow #elasticsearch", # this field has been renamed
"hashtags": ["stackOverflow", "elasticsearch"],
"date": "2017/08/23", # the date has been formated
"shareNumber": 100 # This field has been flattened
}
The second one will be indexed in twitter/author/819744:
# The id for this document should be the one from authorId `"authorId": 819744 `
{
"authorAt": "the_expert",
"description": "Haha it's a fake description"
}
I have defined my output as follows:
output {
stdout { codec => dots }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
}
}
How can I process the information coming from Twitter?
EDIT:
So my full config file should look like this:
input {
twitter {
consumer_key => "consumer_key"
consumer_secret => "consumer_secret"
oauth_token => "access_token"
oauth_token_secret => "access_token_secret"
keywords => [ "random", "word"]
full_tweet => true
type => "tweet"
}
}
filter {
clone {
clones => ["author"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt"]
}
} else {
mutate {
remove_field => ["tweetId", "tweetContent"]
}
}
}
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "author"
document_id => "%{[authorId]}"
}
}
}

You could use the clone filter plugin in Logstash.
Here is a sample Logstash configuration file that takes JSON input from stdin and simply prints the events to stdout:
input {
stdin {
codec => json
type => "tweet"
}
}
filter {
mutate {
add_field => {
"tweetId" => "%{[tweet][tweetId]}"
"content" => "%{[tweet][tweetContent]}"
"date" => "%{[tweet][publishedAt]}"
"shareNumber" => "%{[tweet][analytics][shareNumber]}"
"authorId" => "%{[author][authorId]}"
"authorAt" => "%{[author][authorAt]}"
"description" => "%{[author][description]}"
}
}
date {
match => ["date", "yyyy dd MMMM"]
target => "date"
}
ruby {
code => '
event.set("hashtags", event.get("[tweet][hashtags]"))
'
}
clone {
clones => ["author"]
}
mutate {
remove_field => ["author", "tweet", "message"]
}
if([type] == "tweet") {
mutate {
remove_field => ["authorId", "authorAt", "description"]
}
} else {
mutate {
remove_field => ["tweetId", "content", "hashtags", "date", "shareNumber"]
}
}
}
output {
stdout {
codec => rubydebug
}
}
Using as input:
{"tweet": { "tweetId": 1025, "tweetContent": "Hey this is a fake document", "hashtags": ["stackOverflow", "elasticsearch"], "publishedAt": "2017 23 August","analytics": { "likeNumber": 400, "shareNumber": 100 } }, "author":{ "authorId": 819744, "authorAt": "the_expert", "authorName": "John Smith", "description": "fake description" } }
You would get these two documents:
{
"date" => 2017-08-23T00:00:00.000Z,
"hashtags" => [
[0] "stackOverflow",
[1] "elasticsearch"
],
"type" => "tweet",
"tweetId" => "1025",
"content" => "Hey this is a fake document",
"shareNumber" => "100",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"#version" => "1",
"host" => "my-host"
}
{
"description" => "fake description",
"type" => "author",
"authorId" => "819744",
"#timestamp" => 2017-08-23T20:36:53.795Z,
"authorAt" => "the_expert",
"#version" => "1",
"host" => "my-host"
}
You could alternatively use a ruby filter to flatten the fields, and then rename them with mutate where necessary.
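A rough, untested sketch of that variant (the field paths are simply taken from the sample document above) could look like this:
filter {
  # Flatten the nested [tweet][analytics] object into top-level fields
  ruby {
    code => '
      analytics = event.get("[tweet][analytics]")
      analytics.each { |k, v| event.set(k, v) } if analytics.is_a?(Hash)
    '
  }
  # Rename fields whose target name differs from the source
  mutate {
    rename => { "[tweet][tweetContent]" => "content" }
  }
}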
If you want Elasticsearch to use authorId and tweetId instead of the default ID, you could configure the elasticsearch output with document_id:
output {
stdout { codec => dots }
if [type] == "tweet" {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[tweetId]}"
}
} else {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "twitter"
document_type => "tweet"
document_id => "%{[authorId]}"
}
}
}

Related

How can I fully parse json into ElasticSearch?

I'm parsing MongoDB input into Logstash; the config file is as follows:
input {
mongodb {
uri => "<mongouri>"
placeholder_db_dir => "<path>"
collection => "modules"
batch_size => 5000
}
}
filter {
mutate {
rename => { "_id" => "mongo_id" }
remove_field => ["host", "#version"]
}
json {
source => "message"
target => "log"
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => ["localhost:9200"]
action => "index"
index => "mongo_log_modules"
}
}
Outputs 2/3 documents from the collection into elasticsearch.
{
"mongo_title" => "user",
"log_entry" => "{\"_id\"=>BSON::ObjectId('60db49309fbbf53f5dd96619'), \"title\"=>\"user\", \"modules\"=>[{\"module\"=>\"user-dashboard\", \"description\"=>\"User Dashborad\"}, {\"module\"=>\"user-assessment\", \"description\"=>\"User assessment\"}, {\"module\"=>\"user-projects\", \"description\"=>\"User projects\"}]}",
"mongo_id" => "60db49309fbbf53f5dd96619",
"logdate" => "2021-06-29T16:24:16+00:00",
"application" => "mongo-modules",
"#timestamp" => 2021-10-02T05:08:38.091Z
}
{
"mongo_title" => "candidate",
"log_entry" => "{\"_id\"=>BSON::ObjectId('60db49519fbbf53f5dd96644'), \"title\"=>\"candidate\", \"modules\"=>[{\"module\"=>\"candidate-dashboard\", \"description\"=>\"User Dashborad\"}, {\"module\"=>\"candidate-assessment\", \"description\"=>\"User assessment\"}]}",
"mongo_id" => "60db49519fbbf53f5dd96644",
"logdate" => "2021-06-29T16:24:49+00:00",
"application" => "mongo-modules",
"#timestamp" => 2021-10-02T05:08:38.155Z
}
It seems like stdout shows un-parsable content in "log_entry".
After adding the "rename" to mutate, "modules" still won't be added as a field.
I've tried the grok filter, but after the _id the %{DATA}, %{QUOTEDSTRING} and %{WORD} patterns aren't working for me.
I've also tried updating a nested mapping on the index, which didn't seem to work either.
Is there anything else I can try to get the FULLY nested data into Elasticsearch?
The solution is to filter with mutate gsub and then re-parse with the json filter:
filter {
  mutate { gsub => [ "log_entry", "=>", ": " ] }
  mutate { gsub => [ "log_entry", "BSON::ObjectId\('([0-9a-z]+)'\)", '"\1"' ]}
  json { source => "log_entry" remove_field => [ "log_entry" ] }
}
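For reference, those two gsub rules simply rewrite the Ruby-hash-style string into valid JSON before the json filter parses it, roughly like this (shortened example):
# before gsub (Ruby hash inspect format stored in log_entry):
{"_id"=>BSON::ObjectId('60db49309fbbf53f5dd96619'), "title"=>"user"}
# after both gsub rules (now parseable JSON):
{"_id": "60db49309fbbf53f5dd96619", "title": "user"}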
This outputs the following to stdout:
"_id" => "60db49309fbbf53f5dd96619",
"title" => "user",
"modules" => [
[0] {
"module" => "user-dashboard",
"description" => "User Dashborad"
},
[1] {
"module" => "user-assessment",
"description" => "User assessment"
},
[2] {
"module" => "user-projects",
"description" => "User projects"
}
],

Invalid FieldReference occurred when attempting to create index with the same name as request path value using ElasticSearch output

This is my logstash.conf file:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["%{headers.request_path}", "/"]
add_field => { "index_id" => "%{headers.request_path[0]}" }
add_field => { "document_id" => "%{headers.request_path[1]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}
When I send a PUT request like
C:\Users\BolverkXR\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe
-XPUT 'http://127.0.0.1:31311/twitter'
I want a new index to be created with the name twitter, instead of using the ElasticSearch default.
However, Logstash crashes immediately with the following (truncated) error message:
Exception in pipelineworker, the pipeline stopped processing new
events, please check your filter configuration and restart Logstash.
org.logstash.FieldReference$IllegalSyntaxException: Invalid
FieldReference: headers.request_path[0]
I am sure I have made a syntax error somewhere, but I can't see where it is. How can I fix this?
EDIT:
The same error occurs when I change the filter segment to the following:
filter {
mutate {
split => ["%{[headers][request_path]}", "/"]
add_field => { "index_id" => "%{[headers][request_path][0]}" }
add_field => { "document_id" => "%{[headers][request_path][1]}" }
}
}
The %{foo} syntax is not used to reference the field to split. Also, you should start at position [1] of the array, because position [0] will contain an empty string ("") since there are no characters to the left of the first separator (/). Your filter section should instead look something like this:
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
You can now use the values in %{index_id} and %{document_id}. I tested this with Logstash 6.5.3 and used Postman to send a PUT request to 'http://127.0.0.1:31311/twitter/1'; the console output was as follows:
{
"message" => "",
"index_id" => "twitter",
"document_id" => "1",
"#version" => "1",
"host" => "127.0.0.1",
"#timestamp" => 2019-04-09T12:15:47.098Z,
"headers" => {
"connection" => "keep-alive",
"http_version" => "HTTP/1.1",
"http_accept" => "*/*",
"cache_control" => "no-cache",
"content_length" => "0",
"postman_token" => "cb81754f-6d1c-4e31-ac94-fde50c0fdbf8",
"accept_encoding" => "gzip, deflate",
"request_path" => [
[0] "",
[1] "twitter",
[2] "1"
],
"http_host" => "127.0.0.1:31311",
"http_user_agent" => "PostmanRuntime/7.6.1",
"request_method" => "PUT"
}
}
The output section of your configuration does not change. So, your final logstash.conf file will be something like this:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter {
mutate {
split => ["[headers][request_path]", "/"]
add_field => { "index_id" => "%{[headers][request_path][1]}" }
add_field => { "document_id" => "%{[headers][request_path][2]}" }
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "%{index_id}"
document_id => "%{document_id}"
}
stdout {
codec => "rubydebug"
}
}

geoip.location is defined as an object in mapping [doc] but this name is already used for a field in other types

I'm getting this error:
Could not index event to Elasticsearch. {:status=>400,
:action=>["index", {:_id=>nil, :_index=>"nginx-access-2018-06-15",
:_type=>"doc", :_routing=>nil}, #],
:response=>{"index"=>{"_index"=>"nginx-access-2018-06-15",
"_type"=>"doc", "_id"=>"jo-rfGQBDK_ao1ZhmI8B", "status"=>400,
"error"=>{"type"=>"illegal_argument_exception",
"reason"=>"[geoip.location] is defined as an object in mapping [doc]
but this name is already used for a field in other types"}}}}
I'm getting the above error but don't understand why: this is loading into a brand new ES instance with no data, and this is the first record being inserted. Why am I getting this error? Here is the config:
input {
file {
type => "nginx-access"
start_position => "beginning"
path => [ "/var/log/nginx-archived/access.log.small"]
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
if [type] == "nginx-access" {
grok {
patterns_dir => "/etc/logstash/patterns"
match => { "message" => "%{NGINX_ACCESS}" }
remove_tag => ["_grokparsefailure"]
}
geoip {
source => "visitor_ip"
}
date {
# 11/Jun/2018:06:23:45 +0000
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
target => "#request_time"
}
if "_grokparsefailure" not in [tags] {
ruby {
code => "
thetime = event.get('@request_time').time
event.set('index_date', 'nginx-access-' + thetime.strftime('%Y-%m-%d'))
"
}
}
if "_grokparsefailure" in [tags] {
ruby {
code => "
event.set('index_date', 'nginx-access-error')
"
}
}
}
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
index => "%{index_date}"
template => "/etc/logstash/templates/nginx-access.json"
template_overwrite => true
manage_template => true
template_name => "nginx-access"
}
stdout { }
}
Here's a sample record:
{
"method" => "GET",
"#version" => "1",
"geoip" => {
"continent_code" => "AS",
"latitude" => 39.9289,
"country_name" => "China",
"ip" => "220.181.108.103",
"location" => {
"lon" => 116.3883,
"lat" => 39.9289
},
"region_code" => "11",
"region_name" => "Beijing",
"longitude" => 116.3883,
"timezone" => "Asia/Shanghai",
"city_name" => "Beijing",
"country_code2" => "CN",
"country_code3" => "CN"
},
"index_date" => "nginx-access-2018-06-15",
"ignore" => "\"-\"",
"bytes" => "2723",
"request" => "/wp-login.php",
"#request_time" => 2018-06-15T06:29:40.000Z,
"message" => "220.181.108.103 - - [15/Jun/2018:06:29:40 +0000] \"GET /wp-login.php HTTP/1.1\" 200 2723 \"-\" \"Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)\"",
"path" => "/var/log/nginx-archived/access.log.small",
"#timestamp" => 2018-07-09T01:32:56.952Z,
"host" => "ab1526efddec",
"visitor_ip" => "220.181.108.103",
"timestamp" => "15/Jun/2018:06:29:40 +0000",
"response" => "200",
"referrer" => "\"Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)\"",
"httpversion" => "1.1",
"type" => "nginx-access"
}
Figured out the answer, based on this:
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/removal-of-types.html#_schedule_for_removal_of_mapping_types
The basic problem is that within a single Elasticsearch index, every field must have a single type, even if the documents represent different kinds of records.
That is, if I have a person { "status": "A" } whose status is stored as text, I cannot have a car record { "status": 23 } with status stored as a number in the same index. Based on the info in the link above, I'm now storing one "type" per index.
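For example (using a hypothetical conflict-demo index, not part of my actual setup), dynamic mapping locks a field's type on first use:
# the first document dynamically maps "status" as a number
curl -XPUT 'localhost:9200/conflict-demo/doc/1' -H 'Content-Type: application/json' -d '{ "type": "car", "status": 23 }'
# this one is then rejected with a mapper_parsing_exception, because "A" cannot be indexed into a numeric field
curl -XPUT 'localhost:9200/conflict-demo/doc/2' -H 'Content-Type: application/json' -d '{ "type": "person", "status": "A" }'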
My output section for Logstash looks like this:
output {
elasticsearch {
hosts => "elasticsearch:9200"
index => "%{index_date}"
# Can test loading this with:
# curl -XPUT -H 'Content-Type: application/json' -d#/docker-elk/logstash/templates/nginx-access.json http://localhost:9200/_template/nginx-access
template => "/etc/logstash/templates/nginx-access.json"
template_overwrite => true
manage_template => true
template_name => "nginx-access"
}
stdout { }
}
My template looks like this:
{
"index_patterns": ["nginx-access*"],
"settings": {
},
"mappings": {
"doc": {
"_source": {
"enabled": true
},
"properties": {
"type" : { "type": "keyword" },
"response_time": { "type": "float" },
"geoip" : {
"properties" : {
"location": {
"type": "geo_point"
}
}
}
}
}
}
}
I'm also using the one type per index pattern described in the link above.

Show location points in a tile map with kibi

I'm using Logstash 2.3.1, Elasticsearch 2.3.1 and Kibi 0.3.2. I have problems visualizing locations on a map with Kibi.
I have the following configuration in logstash:
input {
file {
path => "/opt/logstash-2.3.1/logTest/Dades.csv"
type => "Dades"
start_position => "beginning"
}
}
filter {
csv {
columns => ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11", "c12", "c13", "c14", "c15", "c16", "c17", "c18", "c19", "c20", "c21", "c22", "c23"]
separator => ";"
}
ruby {
code => "
temp = event['c17']
event['c17'] = temp[0..1].to_f+ (temp[2..8].to_f/60)
temp = event['c19']
event['c19'] = temp[0..2].to_f+ (temp[3..8].to_f/60)
"
}
mutate {
convert => {
"c3" => "float"
"c5" => "float"
"c7" => "float"
"c9" => "float"
"c11" => "float"
"c13" => "float"
"c15" => "float"
"c21" => "float"
"c23" => "float"
}
}
date {
match => [ "c1", "dd/MM/YYYY HH:mm:ss.SSS", "ISO8601"]
target => "ts_date"
}
mutate {
rename => [ "c17", "[location][lat]",
"c19", "[location][lon]" ]
}
}
output {
elasticsearch {
hosts => localhost
index => "tram3"
manage_template => false
template => "tram3_template.json"
template_name => "tram3"
template_overwrite => "true"
}
stdout {
codec => rubydebug
}
}
The mapping configuration file (tram3_template.json) is like this:
{
"template": "tram3",
"order": 1,
"settings": {
"number_of_shards": 1
},
"mappings": {
"tram3": {
"_all": {
"enabled": false
},
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
When I import the CSV file into Elasticsearch, everything seems to work OK. The output is something like this:
{
"message" => "26/02/2016 00:00:22.984;Total;4231.143555;Trac1;26.547932;Trac2;-338.939697;AA1;-364.611511;AA2;3968.135010;Reo1;0.000000;Reo2;0.000000;Latitud;4125.1846;Longitud;00213.5219;Speed;0.000000;CVS;3873.429443;\r",
"#version" => "1",
"#timestamp" => "2016-04-25T14:02:52.901Z",
"path" => "/opt/logstash-2.3.1/logTest/Dades.csv",
"host" => "ubuntu",
"type" => "Dades",
"c1" => "26/02/2016 00:00:22.984",
"c2" => "Total",
"c3" => 4231.143555,
"c4" => "Trac1",
"c5" => 26.547932,
"c6" => "Trac2",
"c7" => -338.939697,
"c8" => "AA1",
"c9" => -364.611511,
"c10" => "AA2",
"c11" => 3968.13501,
"c12" => "Reo1",
"c13" => 0.0,
"c14" => "Reo2",
"c15" => 0.0,
"c16" => "Latitud",
"c18" => "Longitud",
"c20" => "Speed",
"c21" => 0.0,
"c22" => "CVS",
"c23" => 3873.429443,
"column24" => nil,
"ts_date" => "2016-02-25T23:00:22.984Z",
"location" => {
"lat" => 41.41974333333334,
"lon" => 2.22535
}
}
But when I try to visualize the location field on a map, it doesn't show any results.
I don't know what I'm doing wrong. Why don't the location points appear on the map?
In your ES mapping file, you probably need to enable the storage of the geohash sub-field (defaults to false) as the geohash aggregation cannot work without it.
{
"template": "tram3",
"order": 1,
"settings": {
"number_of_shards": 1
},
"mappings": {
"tram3": {
"_all": {
"enabled": false
},
"properties": {
"location": {
"type": "geo_point",
"geohash": true, <-- add this
"geohash_prefix": true <-- add this
}
}
}
}
}
Then you can build a geohash aggregation on the location.geohash field
Note that if you want to also index all geohash prefixes, you can also add "geohash_prefix": true to your field mapping.
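If you want to sanity-check the geo data outside of Kibi, a plain geohash_grid aggregation on the geo_point field itself should already return buckets. A sketch (index and field names come from the config above; the precision value is arbitrary):
curl -XPOST 'localhost:9200/tram3/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "map_grid": {
      "geohash_grid": { "field": "location", "precision": 5 }
    }
  }
}'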
UPDATE
After reproducing the case, here are some more fixes to do:
You need to change the type in your file input, since it will be used as the document type, and your mapping specifies that the mapping type is named dades2, not Dades:
file {
path => "/opt/logstash-2.3.1/logTest/Dades.csv"
type => "dades2"
start_position => "beginning"
sincedb_path => "/dev/null"
}
Your elasticsearch output should look like below: namely, manage_template should be true and template should point to the full path of your dades2_template.json file (make sure to replace /full/path/to with the actual path).
elasticsearch {
hosts => localhost
index => "dades2"
manage_template => true
template => "/full/path/to/dades2_template.json"
template_name => "dades2"
template_overwrite => "true"
}
The new dades2_template.json file should look like this
{
"template": "dades2",
"order": 1,
"settings": {
"number_of_shards": 1
},
"mappings": {
"dades2": {
"_all": {
"enabled": false
},
"properties": {
"location": {
"type": "geo_point",
"geohash": true,
"geohash_prefix": true
}
}
}
}
}

input json to logstash - config issues?

I have the following JSON input that I want to dump into Logstash (and eventually search/dashboard in Elasticsearch/Kibana).
{"vulnerabilities":[
{"ip":"10.1.1.1","dns":"z.acme.com","vid":"12345"},
{"ip":"10.1.1.2","dns":"y.acme.com","vid":"12345"},
{"ip":"10.1.1.3","dns":"x.acme.com","vid":"12345"}
]}
I'm using the following Logstash configuration:
input {
file {
path => "/tmp/logdump/*"
type => "assets"
codec => "json"
}
}
output {
stdout { codec => rubydebug }
elasticsearch { host => localhost }
}
Output:
{
"message" => "{\"vulnerabilities\":[\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.788Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"message" => "{\"ip\":\"10.1.1.30\",\"dns\":\"z.acme.com\",\"vid\":\"12345\"},\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.838Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"message" => "{\"ip\":\"10.1.1.31\",\"dns\":\"y.acme.com\",\"vid\":\"12345\"},\r",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.870Z",
"type" => "shellshock",
"host" => "av1261wag2sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
{
"ip" => "10.1.1.32",
"dns" => "x.acme.com",
"vid" => "12345",
"#version" => "1",
"#timestamp" => "2014-10-30T23:41:19.884Z",
"type" => "assets",
"host" => "av12612sn00-pn9",
"path" => "/tmp/logdump/stack3.json"
}
Obviously Logstash is treating each line as an event, so it thinks {"vulnerabilities":[ is an event, and I'm guessing the trailing commas on the two subsequent lines mess up the parsing; only the last node appears correct. How do I tell Logstash to parse the events inside the vulnerabilities array and to ignore the commas at the end of the lines?
Updated: 2014-11-05
Following Magnus' recommendations, I added the json filter and it's working perfectly. However, it would not parse the last line of the JSON correctly without specifying start_position => "beginning" in the file input block. Any ideas why not? I know it tails the file by default, but I would have anticipated that the mutate/gsub would handle this smoothly.
input {
file {
path => "/tmp/logdump/*"
type => "assets"
start_position => "beginning"
}
}
filter {
if [message] =~ /^\[?{"ip":/ {
mutate {
gsub => [
"message", "^\[{", "{",
"message", "},?\]?$", "}"
]
}
json {
source => "message"
remove_field => ["message"]
}
}
}
output {
stdout { codec => rubydebug }
elasticsearch { host => localhost }
}
You could skip the json codec and use a multiline filter to join the message into a single string that you can feed to the json filter:
filter {
multiline {
pattern => '^{"vulnerabilities":\['
negate => true
what => "previous"
}
json {
source => "message"
}
}
However, this produces the following unwanted results:
{
"message" => "<omitted for brevity>",
"#version" => "1",
"#timestamp" => "2014-10-31T06:48:15.589Z",
"host" => "name-of-your-host",
"tags" => [
[0] "multiline"
],
"vulnerabilities" => [
[0] {
"ip" => "10.1.1.1",
"dns" => "z.acme.com",
"vid" => "12345"
},
[1] {
"ip" => "10.1.1.2",
"dns" => "y.acme.com",
"vid" => "12345"
},
[2] {
"ip" => "10.1.1.3",
"dns" => "x.acme.com",
"vid" => "12345"
}
]
}
Unless there's a fixed number of elements in the vulnerabilities array, I don't think there's much we can do with this (without resorting to the ruby filter).
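That said, if you do want to keep the multiline approach, the split filter might get you the rest of the way by turning the vulnerabilities array into one event per element (untested sketch):
filter {
  # after the multiline and json filters above, emit one event per array element
  split {
    field => "vulnerabilities"
  }
}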
Or, more simply: how about just applying the json filter to lines that look like what we want and dropping the rest? Your question doesn't make it clear whether all of the log looks like this, so this may not be so useful.
filter {
if [message] =~ /^\s+{"ip":/ {
# Remove trailing commas
mutate {
gsub => ["message", ",$", ""]
}
json {
source => "message"
remove_field => ["message"]
}
} else {
drop {}
}
}
