Logstash split fields - filter

I am trying to split a request field on the & character:
kv {
  source => "request"
  field_split => "&?"
  target => "params"
}
For example, when the request is:
toto=a&titi=b&tata=c
I get this:
{
  "toto": "a",
  "titi": "b",
  "tata": "c"
}
But sometimes the value of a field itself contains the & character, like:
toto=a&titi=b&tata=c..tt&d=lolo
Then with my filter I get:
{
  "toto": "a",
  "titi": "b",
  "tata": "c..tt",
  "d": "lolo"
}
But I want to get:
{
  "toto": "a",
  "titi": "b",
  "tata": "c..tt&d=lolo"
}
Is there any way to capture c..tt&d=lolo as the value of tata?
Best regards
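One possible approach, assuming the key names are fixed and tata is always the last parameter: instead of letting kv split on every &, a grok pattern can capture everything after tata= greedily. On recent Logstash versions grok can write captures into nested fields, so the sketch below mirrors the params target from the example above:
grok {
  match => {
    "request" => "toto=%{DATA:[params][toto]}&titi=%{DATA:[params][titi]}&tata=%{GREEDYDATA:[params][tata]}"
  }
}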

Related

Lowercase field name in Logstash for Elasticsearch index

I have a Logstash command that I pipe a file to and that writes to Elasticsearch. I want to use one field (appName) to select the index I write to. However, the data in this field is not all lowercase, so I need to lowercase it when selecting the index, but I don't want the data in the document itself to be modified.
I have an attempt below where I first copy the original field (appName) to a new one (appNameIndex), lowercase the new field, remove it from the upload, and then use it to pick the index.
input {
  stdin { type => stdin }
}
filter {
  csv {
    separator => " "
    columns => ["appName", "field1", "field2", ...]
    convert => {
      ...
    }
  }
}
filter {
  mutate {
    copy => ["appName", "appNameIndex"]
  }
}
filter {
  mutate {
    lowercase => ["appNameIndex"]
  }
}
filter {
  mutate {
    remove_field => [
      "appNameIndex", # if I remove this it works
      ...
    ]
  }
}
output {
  amazon_es {
    hosts => ["my-es-cluster.us-east-1.es.amazonaws.com"]
    index => "%{appNameIndex}"
    region => "us-east-1"
  }
}
However I am getting errors that say
Invalid index name [%{appIndexName}]
Clearly it's not picking up my mutation. Is it because the remove_field section takes it out entirely? I was hoping it would only be removed from the uploaded document. Am I going about this incorrectly?
UPDATE: I tried taking out the remove_field part and it does in fact work, so that helps identify the source of the error. Now the question becomes how to get around it. With that part of the config removed, I essentially have two fields with the same data, one lowercased and one not.
You can use the @metadata field, a special field whose contents are never included in the output: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#metadata.
input {
  stdin { type => stdin }
}
filter {
  csv {
    separator => " "
    columns => ["appName", "field1", "field2", ...]
    convert => {
      ...
    }
  }
}
filter {
  mutate {
    copy => ["appName", "[@metadata][appNameIndex]"]
  }
}
filter {
  mutate {
    lowercase => ["[@metadata][appNameIndex]"]
  }
}
output {
  amazon_es {
    hosts => ["my-es-cluster.us-east-1.es.amazonaws.com"]
    index => "%{[@metadata][appNameIndex]}"
    region => "us-east-1"
  }
}
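While testing, the contents of @metadata can be printed by passing metadata => true to the rubydebug codec, for example:
output {
  stdout { codec => rubydebug { metadata => true } }
}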

Create a new field with ruby filter

So here params is a hash, and I would like to concatenate all of its keys and store them in a new field. How can I achieve that? I found that it is possible to use inline Ruby code in the configuration file, but I have no idea how to assign the return value of the concatenation.
grok { match => { "request" => [ "url", "%{URIPATH:url_path}%{URIPARAM:url_params}?" ] } }
urldecode { field => "url_path" }
mutate { gsub => ["url_params", "\?", ""] }
kv {
  field_split => "&"
  source => "url_params"
  target => "params"
}
urldecode { field => "params" }
ruby {
  code => 'pattern = params.keys.join(",")'
  # pattern should be the new field that contains the keys, separated by commas
}
Expected result should be:
pattern = "param1,param2,param3 ... and so on"
The solution is to use the event API:
event.set('field_name', field_value)
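Putting it together for this case, a ruby filter along these lines should work (assuming params is the hash produced by the kv filter above):
ruby {
  code => 'event.set("pattern", event.get("params").keys.join(","))'
}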

How to filter {"foo":"bar", "bar": "foo"} with grok to get only the foo field?

I copied
{"name":"myapp","hostname":"banana.local","pid":40161,"level":30,"msg":"hi","time":"2013-01-04T18:46:23.851Z","v":0}
from https://github.com/trentm/node-bunyan and saved it as my logs.json. I am trying to import only two fields (name and msg) into Elasticsearch via Logstash. The problem is that I need a filter that I have not been able to write. I have successfully imported such a line as a single message, but that is of little use in my real case.
That said, how can I import only name and msg into Elasticsearch? I tested several alternatives using http://grokdebug.herokuapp.com/ to arrive at a useful filter, with no success at all.
For instance, %{GREEDYDATA:message} will bring in the entire line as a single message, but how do I split it and ignore every field other than name and msg?
In the end, I am planning to use this:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "data=%{GREEDYDATA:request}" }
  }
  #### some extra lines here probably
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
I have just gone through the list of available Logstash filters, and the prune filter should match your need.
Assuming you have installed the prune filter, your config file should look like this:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  prune {
    whitelist_names => [
      "@timestamp",
      "type",
      "name",
      "msg"
    ]
  }
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
Please note that you will want to keep type so that Elasticsearch indexes the document into the correct type. @timestamp is required if you want to view the data in Kibana.
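If the prune filter is not bundled with your Logstash distribution, it can be installed with the plugin tool from the Logstash home directory:
bin/logstash-plugin install logstash-filter-prune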

Logstash - find length of split result inside mutate

I'm a newbie with Logstash. Currently I'm trying to parse a log in CSV format. I need to split a field on a whitespace delimiter, then add new field(s) based on the split result.
Here is the filter I need to create:
filter {
  ...
  mutate {
    split => ["user", " "]
    if [user.length] == 2 {
      add_field => { "sourceUsername" => "%{user[0]}" }
      add_field => { "sourceAddress" => "%{user[1]}" }
    }
    else if [user.length] == 1 {
      add_field => { "sourceAddress" => "%{user[0]}" }
    }
  }
  ...
}
I get an error at the if statement.
Please advise: is there any way to capture the length of the split result inside the mutate plugin?
Thanks,
Heri
According to your code example, I suppose that you are done with CSV parsing and already have a field user whose value contains either just a sourceAddress, or a sourceUsername and a sourceAddress separated by whitespace.
Now, there are a lot of filters that can be used to retrieve further fields. You don't need to use the mutate filter to split the field. In this case, a more flexible approach would be the grok filter.
Filter:
grok {
  match => {
    "user" => [
      "%{WORD:sourceUsername} %{IP:sourceAddress}",
      "%{IP:sourceAddress}"
    ]
  }
}
A field "user" => "192.168.0.99" would result in
"sourceAddress" => "192.168.0.99".
A field "user" => "Herry 192.168.0.99" would result in
"sourceUsername" => "Herry", "sourceAddress" => "192.168.0.99".
Of course you can change IP to WORD if your sourceAddress is not an IP.
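If you would rather keep the split-and-count logic, note that conditionals cannot be placed inside a mutate block; a ruby filter can hold the branching instead, for example (a sketch assuming the field is named user):
ruby {
  code => '
    parts = event.get("user").to_s.split(" ")
    if parts.length == 2
      event.set("sourceUsername", parts[0])
      event.set("sourceAddress", parts[1])
    elsif parts.length == 1
      event.set("sourceAddress", parts[0])
    end
  '
}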

reparsing a logstash record? fix extracts?

I'm taking a JSON message (CloudTrail, many objects concatenated together), and by the time I'm done filtering it, Logstash doesn't seem to be parsing the message correctly. It's as if the hash was simply dumped into a string.
Anyhow, here's the input and filter.
input {
  s3 {
    bucket => "stanson-ops"
    delete => false
    #snipped unimportant bits
    type => "cloudtrail"
  }
}
filter {
  if [type] == "cloudtrail" {
    json { # http://logstash.net/docs/1.4.2/filters/json
      source => "message"
    }
    ruby {
      code => "event['RecordStr'] = event['Records'].join('~~~')"
    }
    split {
      field => "RecordStr"
      terminator => "~~~"
      remove_field => [ "message", "Records" ]
    }
  }
}
By the time I'm done, Elasticsearch entries include a RecordStr key with the following data. They don't have a message field, nor a Records field.
{"eventVersion"=>"1.01", "userIdentity"=>{"type"=>"IAMUser", "principalId"=>"xxx"}}
Note that this is not JSON; it has already been parsed (which is important for the concat-then-split approach to work).
So the RecordStr key does not look quite right as a single value. Further, in Kibana the filterable fields include RecordStr (with no subfields), and they still list some entries that are no longer there: Records.eventVersion, Records.userIdentity.type.
Why is that? How can I get the proper fields?
Edit 1: here's part of the input.
{"Records":[{"eventVersion":"1.01","userIdentity":{"type":"IAMUser",
It's unprettified JSON. It appears the body of the file (the above) is in the message field; the json filter extracts it and I end up with an array of records in the Records field. That's why I join and split it: I then end up with individual documents, each with a single RecordStr entry. However, the template(?) doesn't seem to understand the new structure.
I've worked out a method that allows for indexing the appropriate CloudTrail fields as you requested. Here are the modified input and filter configs:
input {
  s3 {
    backup_add_prefix => "processed-logs/"
    backup_to_bucket => "test-bucket"
    bucket => "test-bucket"
    delete => true
    interval => 30
    prefix => "AWSLogs/<account-id>/CloudTrail/"
    type => "cloudtrail"
  }
}
filter {
  if [type] == "cloudtrail" {
    json {
      source => "message"
    }
    ruby {
      code => "event.set('RecordStr', event.get('Records').join('~~~'))"
    }
    split {
      field => "RecordStr"
      terminator => "~~~"
      remove_field => [ "message", "Records" ]
    }
    mutate {
      gsub => [
        "RecordStr", "=>", ":"
      ]
    }
    mutate {
      gsub => [
        "RecordStr", "nil", "null"
      ]
    }
    json {
      skip_on_invalid_json => true
      source => "RecordStr"
      target => "cloudtrail"
    }
    mutate {
      add_tag => ["cloudtrail"]
      remove_field => ["RecordStr", "@version"]
    }
    date {
      match => ["[cloudtrail][eventTime]", "ISO8601"]
    }
  }
}
The key observation here is that once the split is done we no longer have valid JSON in the event, and we are therefore required to execute the mutate replacements ('=>' to ':' and 'nil' to 'null'). Additionally, I found it useful to get the timestamp out of the CloudTrail eventTime and do some cleanup of unnecessary fields.
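For illustration, the gsub replacements turn the Ruby-hash-style string shown earlier in the question,
{"eventVersion"=>"1.01", "userIdentity"=>{"type"=>"IAMUser", "principalId"=>"xxx"}}
into parseable JSON,
{"eventVersion":"1.01", "userIdentity":{"type":"IAMUser", "principalId":"xxx"}}
and any nil values become null in the same way, which is what the second json filter needs in order to parse RecordStr into the cloudtrail field.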
