I cannot read split fields in Logstash - elasticsearch

I can split the "msg" field in Logstash with the following filter:
filter {
kv {
field_split => "|"
source => "msg"
}
}
It is properly separated, but the "latitude" field is not processed; it is added as a literal string:
" deviceValue" => "null ",
**"test1" => "%{latitude}"**,
" timeLabel" => "NOON ",
" appllicationName" => "null ",
" longitude" => "29.08222 ",
Thank you for your help

Take a closer look at the parsed values. I believe they are not in fact properly separated: you have spaces in the source data surrounding your split character "|", so when it is parsed you don't actually get a field named "latitude" but " latitude".
From your post:
" longitude" => "29.08222 ",
Do you see the leading space on " longitude" and the trailing one in the value?
I assume you don't need those, so one way to resolve the problem would be to strip the whitespace from the source data and then use your existing filter.
Alternatively, if you cannot modify the source data, you can set your filter to split on " | ":
filter {
kv {
field_split => " | "
source => "msg"
}
}
And finally, if you indeed need those spaces and cannot change that, you can change "%{latitude}" to "%{ latitude}".
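To see why the space matters, here is a minimal plain-Ruby sketch (not Logstash itself, and using a made-up msg value) of what splitting on a bare "|" versus " | " does to the field names:

```ruby
# Hypothetical sample resembling the source data, with spaces around "|".
msg = "latitude=41.01567 | longitude=29.08222 | timeLabel=NOON"

# Splitting on "|" alone keeps the surrounding spaces,
# so the key becomes " longitude", not "longitude".
pairs = msg.split("|").map { |pair| pair.split("=", 2) }.to_h
pairs.key?("longitude")   # => false
pairs.key?(" longitude")  # => true

# Splitting on " | " instead yields clean keys and values.
clean = msg.split(" | ").map { |pair| pair.split("=", 2) }.to_h
clean["longitude"]        # => "29.08222"
```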

Related

How to grok a pipe-delimited string in a log line

I need to grok a pipe-delimited string of values in a log line; for example:
|NAME=keith|DAY=wednesday|TIME=09:27:423227|DATE=08/06/2019|amount=68.23|currency=USD|etc...
What is the easiest way to do this?
Is there any form of a grok split?
Thanks,
Keith
Your scenario is the perfect use case for Logstash's kv (key-value) filter!
The basic idea behind this filter plugin is to extract key-value pairs in a repetitive pattern like yours.
In this case the field_split character would be the pipe ( | ).
To distinguish keys from values you would set the value_split character to the equal sign ( = ).
Here's a sample but untested filter configuration:
filter{
kv{
source => "your_field_name"
target => "kv"
field_split => "\|"
value_split => "="
}
}
Notice how the pipe character in the field_split setting is escaped. Since the pipe is a regex-recognized character, you have to escape it!
This filter will extract all found key-value pairs from your source field and nest them under the target named "kv" (the name is arbitrary), from which you can access the fields.
You might want to take a look at the other possible settings of the kv filter to satisfy your needs.
I hope I could help you! :-)
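To illustrate what the kv filter does with your sample line, here is a rough plain-Ruby equivalent (just an illustration, not the Logstash plugin itself):

```ruby
line = "|NAME=keith|DAY=wednesday|TIME=09:27:423227|DATE=08/06/2019|amount=68.23|currency=USD"

# Split on the pipe, drop the empty piece before the leading "|",
# then split each pair on the first "=" to separate key from value.
kv = line.split("|")
         .reject(&:empty?)
         .map { |pair| pair.split("=", 2) }
         .to_h

kv["NAME"]    # => "keith"
kv["amount"]  # => "68.23"
```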

Using Logstash Ruby filter to parse csv file

I have an elasticsearch index which I am using to index a set of documents.
These documents are originally in csv format and I am looking parse these using logstash.
My problem is that I have something along the following lines.
field1,field2,field3,xyz,abc
field3 is something like 123456789 and I want to parse it as 4.56(789) using a ruby code filter.
My try:
I tried with stdin and stdout with the following logstash.conf.
input {
stdin {
}
}
filter {
ruby {
code => "
b = event["message"]
string2=""
for counter in (3..(num.size-1))
if counter == 4
string2+= '_'+ num[counter]
elsif counter == 6
string2+= '('+num[counter]
elsif counter == 8
string2+= num[counter] +')'
else
string2+= num[counter]
end
end
event["randomcheck"] = string2
"
}
}
output {
stdout {
codec=>rubydebug
}
}
I am getting syntax error using this.
My final aim is to use this with my csv file , but first I was trying this with stdin and stdout.
Any help will be highly appreciated.
The reason you're getting a syntax error is most likely that you have unescaped double quotes inside the double-quoted string. Either make the string single-quoted, or keep it double-quoted but use single quotes inside. I also don't understand how that code is supposed to work; num is never assigned.
But that aside, why use a ruby filter in the first place? You can use a csv filter for the CSV parsing and a couple of standard filters to transform 123456789 to 4.56(789).
filter {
# Parse the CSV fields and then delete the 'message' field.
csv {
remove_field => ["message"]
}
# Given an input such as 123456789, extract 4, 56, and 789 into
# their own fields.
grok {
match => [
"column3",
"\d{3}(?<intpart>\d)(?<fractionpart>\d{2})(?<parenpart>\d{3})"
]
}
# Put the extracted fields together into a single field again,
# then delete the temporary fields.
mutate {
replace => ["column3", "%{intpart}.%{fractionpart}(%{parenpart})"]
remove_field => ["intpart", "fractionpart", "parenpart"]
}
}
The temporary fields have really bad names in the example above since I don't know what they represent. Also, depending on what the input can look like you may have to adjust the grok expression. As it stands now it assumes nine-digit input.
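The transformation the grok and mutate filters perform above can be sketched in plain Ruby, using the identical regular expression:

```ruby
num = "123456789"

# Skip the first three digits, then capture one digit, two digits,
# and three digits -- the same groups the grok pattern uses.
m = num.match(/\d{3}(?<intpart>\d)(?<fractionpart>\d{2})(?<parenpart>\d{3})/)

result = "#{m[:intpart]}.#{m[:fractionpart]}(#{m[:parenpart]})"
# => "4.56(789)"
```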

How would I dynamically reassign a variable based on the user's text input?

I have
board_a1 = " [ ] "
board_a2 = " [ ] "
board_a3 = " [ ] "
etc. I prompt the user, and get input:
puts "Choose your square"
user_input = gets.chomp.downcase
If the user types in "a2", I need to reassign board_a2:
board_a2 = " [X] "
How would I match the user input to reassign the proper board square? I thought I could concatenate the pieces like so:
board_assignment = "board_#{user_input}"
Then use
board_assignment = " [X] "
But board_assignment != board_a2. It's a new variable. I need to reassign the proper variable based on the user input right after the user types it in. Is there some special syntax to concatenate strings that can then represent the existing variable? What do you call this situation, and how would I go about getting my dynamic variable set?
You should use a single variable, which contains a hash, mapping the space's name to its current state:
board = {
"a1" => " [ ] ",
"a2" => " [ ] ",
"a3" => " [ ] "
}
The rest becomes pretty self-explanatory. When the user enters "a2", modify the value at board["a2"].
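For example, with the user's input hard-coded in place of the gets call from your question:

```ruby
board = {
  "a1" => " [ ] ",
  "a2" => " [ ] ",
  "a3" => " [ ] "
}

# In the real program this would come from gets.chomp.downcase;
# hard-coded here for illustration.
user_input = "a2"

# Only update squares that actually exist on the board.
board[user_input] = " [X] " if board.key?(user_input)

board["a2"]  # => " [X] "
```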

In a Rebol function which gets a variable as an argument, how to retrieve the original variable's name

I'm debugging a script which doesn't behave as I wished;
I'm writing a small debug utility function, which would print the 'word followed by its value.
I wrote this:
debug: func [x] [print rejoin ['x " => " x] wait 0.5 ]
And, in my code, I'd like to call it simply like this:
phrase: "beer is good"
mots: parse phrase " "
debug phrase
foreach mot mots [
debug mot
;do something...
]
and I would dream that it would output on the console something like:
phrase => "beer is good"
mot => "beer"
mot => "is"
mot => "good"
but I can't find a way to retrieve the variable's original name, that is, its name out of the scope of the function.
If you look at your 'debug function and compare it to the source of '??, you'll see what you would have needed to do differently:
??: func [
{Prints a variable name followed by its molded value. (for debugging)}
'name
][
print either word? :name [head insert tail form name reduce [": " mold name: get name]] [mold :name]
:name
]
I had not searched enough...
?? variable
does what I needed.
>> ? ??
USAGE:
?? 'name
DESCRIPTION:
Prints a variable name followed by its molded value. (for debugging)
?? is a function value.
ARGUMENTS:
name -- (Type: any)

How can I extract the words from this string " !!one!! **two** ##three##" using Regex in Ruby

In IRB, I can do this:
c = /(\b\w+\b)\W*(\b\w+\b)\W*(\b\w+\b)\W*/.match(" !!one** *two* ##three## ")
And get this:
=> MatchData "one** *two* ##three## " 1:"one" 2:"two" 3:"three"
But assuming I don't know the number of words in advance, how can I still extract all the words out of the string? For example, it might be " !!one** *two* ##three## " in one instance, but " !!five** *six* " in another.
Thanks.
> " !!one** *two* ##three## ".scan(/\w+/)
=> ["one", "two", "three"]
Also, scan returns an array of arrays when the pattern contains capture groups ().
> "Our fifty users left 500 posts this month.".scan(/([a-z]+|\d+)\s+(posts|users)/i)
=> [["fifty", "users"], ["500", "posts"]]
http://ruby-doc.org/core/classes/String.html#M000812
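Applied to both of your sample strings, scan extracts however many words are present, so you don't need to know the count in advance:

```ruby
samples = [" !!one** *two* ##three## ", " !!five** *six* "]

# Each call to scan returns every run of word characters it finds.
words = samples.map { |s| s.scan(/\w+/) }
# => [["one", "two", "three"], ["five", "six"]]
```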
