Logstash: sum values from JDBC input - Elasticsearch

I'm new to Logstash and I'm trying to sum the values from two columns of my database and generate a new metric.
I've exhausted all my alternatives.
This is my conf file. The new field that I'm creating is 'tot_ped'.
I created another field, 'valor', to try to understand what is happening with the value of 'TOT_PROD'.
The two columns that I'm trying to sum are 'TOT_PROD' and 'TOT_SERV'.
input {
  jdbc {
    jdbc_driver_library => "jtds-1.3.1.jar"
    jdbc_driver_class => "Java::net.sourceforge.jtds.jdbc.Driver"
    jdbc_connection_string => "jdbc:jtds:sqlserver://xxxxxx:1433/dbPHXPSS"
    jdbc_user => "readonly"
    jdbc_password => "xxxxx"
    statement => "SELECT [NVENDA]
                  ,[CPROJETO]
                  ,[TECNOLOGIA]
                  ,[PREVISAO]
                  ,[APROVACAO]
                  ,[STATUS]
                  ,[CLIENTE]
                  ,[TITULO]
                  ,[TOT_PROD]
                  ,[TOT_SERV]
                  ,[QTD_H_FE]
                  ,[QTD_H_SE]
                  ,[QTD_H_PM]
                  ,[QTD_H_DES]
                  ,[TOT_DESPESA]
                  ,[VENDEDOR]
                  ,[TIPO_SOLICITACAO]
                  ,[TECNOLOGIAPROJ]
                  FROM [dbPHXPSS].[dbo].[VW_PROVISAOPROJETOS]
                  WHERE QTD_H_PM IS NOT NULL"
  }
}
filter {
  ruby {
    code => "
      hash = event.to_hash
      hash.each do |k,v|
        if v == nil
          event.set(k,'0')
        end
        if k == 'TOT_PROD'
          event.set('teste', v)
        end
      end
      # testing the content of the field 'TOT_PROD'
      event.set('valor', event.get('teste'))
    "
  }
  mutate {
    convert => ["TOT_PROD","float_eu"]
  }
  ruby {
    code => "
      # adding the values into 'tot_ped'
      event.set('tot_ped', (event.get('TOT_PROD').to_f + event.get('TOT_SERV').to_f))
    "
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "phoenix"
    document_type => "phxdb"
  }
  stdout {}
}
This is the output from Logstash.
What I've noticed is that the field 'tot_ped' is not adding the values, and the test field is returning the value of 'TOT_PROD' as nil.
{
  "@version" => "1",
  "cliente" => "OAB SP ",
  "vendedor" => "Sxxxxxxxx",
  "status" => "MEDIA",
  "tipo_solicitacao" => "0",
  "aprovacao" => "0",
  "tot_serv" => 0.0,
  "cprojeto" => "0",
  "qtd_h_se" => 132.0,
  "qtd_h_pm" => 24.0,
  "valor" => nil,
  "tot_ped" => 0.0,
  "qtd_h_des" => "0",
  "@timestamp" => 2019-03-15T13:42:15.243Z,
  "tot_prod" => 134133.7195,
  "tot_despesa" => "0",
  "titulo" => "Projeto Wifi",
  "previsao" => "0",
  "tecnologia" => "VSF",
  "tecnologiaproj" => "0",
  "qtd_h_fe" => 56.0,
  "nvenda" => 20361.0
}
Thanks in advance.

I fixed it!
The problem was that I was using upper case for the field names; I just changed them to lower case and it worked!
ruby {
  code => "
    event.set('tot_ped', ((event.get('tot_prod').to_f * 2.25) + event.get('tot_serv').to_f))
  "
}
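For context: the jdbc input lowercases column names by default (its lowercase_column_names option defaults to true), which is why the fields arrive as 'tot_prod' and 'tot_serv' rather than 'TOT_PROD' and 'TOT_SERV'. A minimal sketch of the plain sum under that default, without the extra 2.25 factor:
ruby {
  code => "
    # fields are lowercase because the jdbc input lowercases column names by default
    event.set('tot_ped', event.get('tot_prod').to_f + event.get('tot_serv').to_f)
  "
}
If you prefer to keep the original column casing, you could instead set lowercase_column_names => false on the jdbc input and keep the upper-case field names in the filter.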

Related

Logstash elasticsearch output plugin script example to add a value to an array field?

Hello, I am getting this error when I try to add a value to an existing array field in Elasticsearch. My Logstash output configuration is:
elasticsearch {
  document_id => 1
  action => "update"
  hosts => ["X.X.X.X:9200"]
  index => "test"
  script_lang => "painless"
  script_type => "inline"
  script => 'ctx._source.arrat.add(event("[file][fuid]"))'
}
The error I was getting is:
error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script", "caused_by"=>{"type"=>"script_exception", "reason"=>"compile error", "script_stack"=>["ctx._source.arrat.add(event(\"[file][fuid]\"))", " ^---- HERE"], "script"=>"ctx._source.arrat.add(event(\"[file][fuid]\"))", "lang"=>"painless", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Unknown call [event] with [1] arguments."}}}}}}.
Below is the Logstash configuration:
input {
  beats {
    port => "12109"
  }
}
filter {
  mutate {
    id => "brolog-files-rename-raw-fields"
    rename => {
      "[ts]" => "[file][ts]"
      "[fuid]" => "[file][fuid]"
      "[tx_hosts]" => "[file][tx_hosts]"
      "[rx_hosts]" => "[file][rx_hosts]"
      "[conn_uids]" => "[file][conn_uids]"
      "[source]" => "[file][source]"
      "[depth]" => "[file][depth]"
      "[analyzers]" => "[file][analyzers]"
      "[mime_type]" => "[file][mime_type]"
      "[duration]" => "[file][duration]"
      "[is_orig]" => "[file][is_orig]"
      "[seen_bytes]" => "[file][seen_bytes]"
      "[missing_bytes]" => "[file][missing_bytes]"
      "[overflow_bytes]" => "[file][overflow_bytes]"
      "[timedout]" => "[file][timedout]"
      "[md5]" => "[file][md5]"
      "[sha1]" => "[file][sha1]"
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    document_id => 1
    action => "update"
    doc_as_upsert => "true"
    hosts => ["X.X.X.X:9200"]
    index => "test"
    script_lang => "painless"
    script_type => "inline"
    script => 'ctx._source.arrat.add(event.[file][fuid])'
  }
}
I am getting data in JSON format.
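One likely cause of the "Unknown call [event]" error: in the elasticsearch output's scripted update, the event is not callable inside Painless; as far as I understand, it is passed as a script parameter whose name is controlled by script_var_name (which defaults to "event"), so it has to be reached through params. A sketch of what the script line might look like under that assumption, keeping the field name "arrat" exactly as it appears in the config above:
script => 'ctx._source.arrat.add(params.event["file"]["fuid"])'
The target document would still need an existing "arrat" array field for the add() call to succeed.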

Set NULL values pulled from MySQL into Elasticsearch to a default value using the Logstash JDBC input plugin

The fields that I want to fetch with a default value can be NULL in MySQL.
This is my configuration for the Logstash plugin:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/elements"
    jdbc_user => "user"
    jdbc_password => "admin"
    jdbc_validate_connection => true
    jdbc_driver_library => "C:/work/Wildfly/wildfly-9.0.2.Final/modules/com/mysql/main/mysql-connector-java-5.1.36.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement_filepath => "query.sql"
    use_column_value => true
    tracking_column => id
    #schedule => "*/3 * * * *"
    clean_run => true
  }
}
output {
  elasticsearch {
    index => "emptytest"
    document_type => "history"
    document_id => "%{id}"
    hosts => "localhost"
  }
}
I tried to add a filter to test for the null values, but it does not detect them.
if [sourcecell_id] == "NULL" {
  mutate {
  }
}
The furthest I got was deleting the field via a Ruby script, but I don't want to delete it; I want to replace it with a default value, 0 for example.
This is the Ruby script:
filter {
  ruby {
    code => "
      hash = event.to_hash
      hash.each do |k,v|
        if v == nil
          event.remove(k)
        end
      end
    "
  }
}
The solution I found is a Ruby script. I hope it will help someone else.
filter {
  ruby {
    code => "
      hash = event.to_hash
      hash.each do |k,v|
        if v == nil
          event[k] = 0
        end
      end
    "
  }
}
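One caveat: event[k] = 0 relies on the legacy event API; on Logstash 5.x and later the Ruby filter only exposes the getter/setter API, so the same idea would look roughly like this (a sketch of the identical logic written with event.set):
filter {
  ruby {
    code => "
      # replace every nil field with a default value of 0
      event.to_hash.each do |k, v|
        event.set(k, 0) if v.nil?
      end
    "
  }
}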

Can I use the mutate filter in Logstash to convert some fields of a genjdbc input to integers?

I am using the genjdbc input plugin for Logstash to get data from a DB2 database. It works perfectly; I get all the database columns as fields in Kibana.
The problem I have is that in Kibana all fields are of string type, and I want the numeric fields to be integers. I have tried the following code, but the result is the same as if no filter clause existed.
Can someone help me solve this? Thanks in advance!
The logstash.conf code:
input {
  genjdbc {
    jdbcHost => "XXX.XXX.XXX.XXX"
    jdbcPort => "51260"
    jdbcTargetDB => "db2"
    jdbcDBName => "XXX"
    jdbcUser => "XXX"
    jdbcPassword => "XXX"
    jdbcDriverPath => "C:\...\db2jcc4.jar"
    jdbcSQLQuery => "SELECT * FROM XXX1"
    jdbcTimeField => "LOGSTAMP"
    jdbcPStoreFile => "C:\elk\logstash\bin\db2.pstore"
    jdbcURL => "jdbc:db2://XXX.XXX.XXX.XXX:51260/XXX"
    type => "table1"
  }
  genjdbc {
    jdbcHost => "XXX.XXX.XXX.XXX"
    jdbcPort => "51260"
    jdbcTargetDB => "db2"
    jdbcDBName => "XXX"
    jdbcUser => "XXX"
    jdbcPassword => "XXX"
    jdbcDriverPath => "C:\...\db2jcc4.jar"
    jdbcSQLQuery => "SELECT * FROM XXX2"
    jdbcTimeField => "LOGSTAMP"
    jdbcPStoreFile => "C:\elk\logstash\bin\db2.pstore"
    jdbcURL => "jdbc:db2://XXX.XXX.XXX.XXX:51260/XXX"
    type => "table2"
  }
}
filter {
  mutate {
    convert => [ "T1", "integer" ]
    convert => [ "T2", "integer" ]
    convert => [ "T3", "integer" ]
  }
}
output {
  if [type] == "table1" {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "db2_1-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "table2" {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "db2_2-%{+YYYY.MM.dd}"
    }
  }
}
What you have should work as long as the fields you are trying to convert to integers are named T1, T2, T3 and you are inserting into an index that doesn't have any data yet. If you already have data in the index, you'll need to delete the index so that Logstash can recreate it with the correct mapping.

Understanding attributes in AWS DynamoDB with Ruby

I can't seem to wrap my head around the AWS Ruby SDK documentation for DynamoDB (or more specifically the concepts of the DynamoDB data model).
Specifically I've been reading: http://docs.aws.amazon.com/AWSRubySDK/latest/frames.html#!AWS/DynamoDB.html
Note: I have read through the Data Model documentation as well and it's still not sinking in; I'm hoping a proper example in Ruby will clear up my confusion.
In the following code snippet, I create a table called "my_books" which has a primary_key called "item_id" and it's a Hash key (not a Hash/Range combination)...
dyn = AWS::DynamoDB::Client::V20120810.new
# => #<AWS::DynamoDB::Client::V20120810>
dyn.create_table({
  :attribute_definitions => [
    { :attribute_name => "item_id", :attribute_type => "N" }
  ],
  :table_name => "my_books",
  :key_schema => [
    { :attribute_name => "item_id", :key_type => "HASH" },
  ],
  :provisioned_throughput => {
    :read_capacity_units => 10,
    :write_capacity_units => 10
  }
})
# => {:table_description=>{:attribute_definitions=>[{:attribute_name=>"item_id", :attribute_type=>"N"}], :table_name=>"my_books", :key_schema=>[{:attribute_name=>"item_id", :key_type=>"HASH"}], :table_status=>"ACTIVE", :creation_date_time=>2014-11-24 16:59:47 +0000, :provisioned_throughput=>{:number_of_decreases_today=>0, :read_capacity_units=>10, :write_capacity_units=>10}, :table_size_bytes=>0, :item_count=>0}}
dyn.list_tables
# => {:table_names=>["my_books"]}
dyn.scan :table_name => "my_books"
# => {:member=>[], :count=>0, :scanned_count=>0}
I then try to populate the table with a new item. My understanding is that I should specify the numerical value for item_id (which is the primary key) and can then specify other attributes for the new item/record/document I'm adding to the table...
dyn.put_item(
  :table_name => "my_books",
  :item => {
    "item_id" => 1,
    "item_title" => "My Book Title",
    "item_released" => false
  }
)
But that last command returns the following error:
expected hash value for value at key item_id of option item
So although I don't quite understand what the hash will be made of, I try doing that:
dyn.put_item(
  :table_name => "my_books",
  :item => {
    "item_id" => { "N" => 1 },
    "item_title" => "My Book Title",
    "item_released" => false
  }
)
But this now returns the following error...
expected string value for key N of value at key item_id of option item
I've tried different variations, but can't seem to figure out how this works.
EDIT/UPDATE: as suggested by Uri Agassi, I changed the value from 1 to "1". I'm not really sure why this has to be quoted, as I've defined the type to be a number and not a string, but OK, let's just accept this and move on.
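As far as I can tell, the quoting is because this low-level client speaks the raw DynamoDB API format, where every attribute value is a one-key hash mapping a type descriptor ("N", "S", "B", ...) to a string, and the descriptor tells DynamoDB how to interpret that string. A small sketch with a hypothetical second item:
dyn.put_item(
  :table_name => "my_books",
  :item => {
    "item_id"    => { "N" => "2" },             # a number, but serialized as a string
    "item_title" => { "S" => "Another Title" }  # a string attribute
  }
)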
I've finally figured out most of what I needed to understand the data model of DynamoDB and using the Ruby SDK.
Below is my example code, which hopefully will help someone else, and I've got a fully fleshed out example here: https://gist.github.com/Integralist/9f9f2215e001b15ac492#file-3-dynamodb-irb-session-rb
# https://github.com/BBC-News/alephant-harness can automate the below set-up when using Spurious
# API Documentation http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Operations.html
# Ruby SDK API Documentation http://docs.aws.amazon.com/AWSRubySDK/latest/frames.html#!AWS/DynamoDB/Client/V20120810.html
require "aws-sdk"
require "dotenv"
require "spurious/ruby/awssdk/helper"
Spurious::Ruby::Awssdk::Helper.configure
# => <AWS::Core::Configuration>
Dotenv.load(
  File.join(
    File.dirname(__FILE__), "config", "development", "env.yaml"
  )
)
# => {"AWS_REGION"=>"eu-west-1", "AWS_ACCESS_KEY_ID"=>"development_access", "AWS_SECRET_ACCESS_KEY"=>"development_secret", "DYNAMO_LU"=>"development_lookup", "DYNAMO_SQ"=>"development_sequence", "SQS_QUEUE"=>"development_queue", "S3_BUCKET"=>"development_bucket"}
dyn = AWS::DynamoDB::Client.new :api_version => "2012-08-10"
dyn = AWS::DynamoDB::Client::V20120810.new
# => #<AWS::DynamoDB::Client::V20120810>
dyn.create_table({
  # This section requires us to define our primary key
  # Which will be called "item_id" and it must be a numerical value
  :attribute_definitions => [
    { :attribute_name => "item_id", :attribute_type => "N" }
  ],
  :table_name => "my_books",
  # The primary key will be a simple Hash key (not a Hash/Range which requires both key types to be provided)
  # The attributes defined above must be included in the :key_schema Array
  :key_schema => [
    { :attribute_name => "item_id", :key_type => "HASH" }
  ],
  :provisioned_throughput => {
    :read_capacity_units => 10,
    :write_capacity_units => 10
  }
})
# => {:table_description=>{:attribute_definitions=>[{:attribute_name=>"item_id", :attribute_type=>"N"}], :table_name=>"my_books", :key_schema=>[{:attribute_name=>"item_id", :key_type=>"HASH"}], :table_status=>"ACTIVE", :creation_date_time=>2014-11-24 16:59:47 +0000, :provisioned_throughput=>{:number_of_decreases_today=>0, :read_capacity_units=>10, :write_capacity_units=>10}, :table_size_bytes=>0, :item_count=>0}}
dyn.list_tables
# => {:table_names=>["my_books"]}
dyn.scan :table_name => "my_books"
# => {:member=>[], :count=>0, :scanned_count=>0}
dyn.put_item(
  :table_name => "my_books",
  :item => {
    "item_id" => { "N" => "1" }, # oddly this needs to be a String and not a strict Integer?
    "item_title" => { "S" => "My Book Title"},
    "item_released" => { "B" => "false" }
  }
)
# Note: if you use an "item_id" that already exists, then the item will be updated.
# Unless you use the "expected" conditional feature
dyn.put_item(
  :table_name => "my_books",
  :item => {
    "item_id" => { "N" => "1" }, # oddly this needs to be a String and not a strict Integer?
    "item_title" => { "S" => "My Book Title"},
    "item_released" => { "B" => "false" }
  },
  # The :expected key specifies the conditions of our "put" operation.
  # If "item_id" isn't NULL (i.e. it exists) then our condition has failed.
  # This means we only write the value when the key "item_id" hasn't been set.
  :expected => {
    "item_id" => { :comparison_operator => "NULL" }
  }
)
# AWS::DynamoDB::Errors::ConditionalCheckFailedException: The conditional check failed
dyn.scan :table_name => "my_books"
# => {:member=>[{"item_id"=>{:n=>"1"}, "item_title"=>{:s=>"My Book Title"}, "item_released"=>{:b=>"false"}}], :count=>1, :scanned_count=>1}
dyn.query :table_name => "my_books", :consistent_read => true, :key_conditions => {
  "item_id" => {
    :comparison_operator => "EQ",
    :attribute_value_list => [{ "n" => "1" }]
  },
  "item_title" => {
    :comparison_operator => "EQ",
    :attribute_value_list => [{ "s" => "My Book Title" }]
  }
}
# => {:member=>[{"item_id"=>{:n=>"1"}, "item_title"=>{:s=>"My Book Title"}, "item_released"=>{:b=>"false"}}], :count=>1, :scanned_count=>1}
dyn.query :table_name => "my_books",
  :consistent_read => true,
  :select => "SPECIFIC_ATTRIBUTES",
  :attributes_to_get => ["item_title"],
  :key_conditions => {
    "item_id" => {
      :comparison_operator => "EQ",
      :attribute_value_list => [{ "n" => "1" }]
    },
    "item_title" => {
      :comparison_operator => "EQ",
      :attribute_value_list => [{ "s" => "My Book Title" }]
    }
  }
# => {:member=>[{"item_title"=>{:s=>"My Book Title"}}], :count=>1, :scanned_count=>1}
dyn.delete_item(
  :table_name => "my_books",
  :key => {
    "item_id" => { "n" => "1" }
  }
)
# => {:member=>[], :count=>0, :scanned_count=>0}

How can I get a key => value out of this foursquare hash?

Here is what it looks like:
{
"groups" => [
{ "venues" => [
{ "city" => "Madrid",
"address" => "Camino de Perales, s/n",
"name" => "Caja Mágica",
"stats" => {"herenow"=>"0"},
"geolong" => -3.6894333,
"primarycategory" => {
"iconurl" => "http://foursquare.com/img/categories/arts_entertainment/stadium.png",
"fullpathname" => "Arts & Entertainment:Stadium",
"nodename" => "Stadium",
"id" => 78989 },
"geolat" => 40.375045,
"id" => 2492239,
"distance" => 0,
"state" => "Spain" }],
"type" => "Matching Places"}]
}
Big and ugly... I just want to grab the id out. How would I go about doing this?
h = { "groups" => ......... }
The two ids are:
h["groups"][0]["venues"][0]["primarycategory"]["id"]
h["groups"][0]["venues"][0]["id"]
If the hash stores one id (assuming the value is stored in a variable called hash):
hash["groups"][0]["venues"][0]["primarycategory"]["id"] rescue nil
If the hash stores multiple ids then:
ids = Array(hash["groups"]).map do |g|
  Array(g["venues"]).map do |v|
    v["primarycategory"]["id"] rescue nil
  end.compact
end.flatten
ids holds the array of ids.
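On Ruby 2.3 and later the same lookups can also be written with Hash#dig, which returns nil instead of raising when an intermediate key is missing (a small sketch against the hash shown above, assuming it is stored in h):
h.dig("groups", 0, "venues", 0, "primarycategory", "id")  # => 78989
h.dig("groups", 0, "venues", 0, "id")                     # => 2492239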
