Only one Elasticsearch jdbc input is executing

I have two jdbc inputs in my logstash.conf file. The file validates and starts fine and I can see the pipeline running.
The second query shows up in the log and processes fine, but the first jdbc input query never even tries to run (at least there are no references to it in the log).
I use an identical template for all of the jdbc settings, so I know that is correct. The only difference is the name of the statement_filepath, but both of those files execute fine in Toad and return data.
input {
  jdbc {
    jdbc_driver_library => "/iappl/confluent-4.1.1/share/java/kafka-connect-jdbc/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "..."
    jdbc_user => "..."
    jdbc_password => "..."
    schedule => "*/30 * * * * * "
    statement_filepath => "/iappl/log_conf/current/configs/scania/sql/V02_INBOUNDLOAD.sql"
    type => "V02_INBOUND"
  }
  jdbc {
    jdbc_driver_library => "/iappl/confluent-4.1.1/share/java/kafka-connect-jdbc/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "..."
    jdbc_user => "..."
    jdbc_password => "..."
    schedule => "*/30 * * * * * "
    statement_filepath => "/iappl/log_conf/current/configs/scania/sql/V02_OUTBOUNDLOAD.sql"
    type => "V02_OUTBOUND"
  }
}
In the log, the second query shows up on schedule, but the first one never does, and there is no mention of it failing in the log.
Ideas?

Related

How to insert multiple table values into each table?

Using Logstash, I want to index multiple tables into Elasticsearch.
I have used Logstash with the jdbc input several times,
but only one value ends up saved per table.
I tried the Stack Overflow answer below, but I couldn't solve it.
-> multiple inputs on logstash jdbc
This is my conf file code, which I ran myself.
input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from table_name1"
    tracking_column => "table_name1"
    use_column_value => true
    clean_run => true
  }
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from table_name2"
    tracking_column => "table_name2"
    use_column_value => true
    clean_run => true
  }
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from table_name3"
    tracking_column => "table_name3"
    use_column_value => true
    clean_run => true
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "aws_05181830_2"
    document_type => "%{type}"
    document_id => "{%[#metadata][document_id]}"
  }
  stdout {
    codec => rubydebug
  }
}
Problem
1. As you can see in the picture, only one value is saved per table.
2. When a new table is loaded, the previous table's values disappear.
My goal
How can I save each table's data properly, without duplicates?
You are setting the document_id of the document in elasticsearch using
document_id => "{%[#metadata][document_id]}"
This is not a valid sprintf reference, so it uses the literal value {%[#metadata][document_id]}. As a result, every document you index overwrites the previous document. I suggest you remove this option.
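For reference, a minimal sketch of that output section with the option removed (hosts, index, and document_type are from the question; whether to re-add a document_id later would depend on each table having a real unique column such as a primary key):
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "aws_05181830_2"
    document_type => "%{type}"
    # no document_id: Elasticsearch generates a unique id per document,
    # so rows from different tables no longer overwrite each other
  }
  stdout {
    codec => rubydebug
  }
}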

logstash input plugin for postgresql issue - duplication (ignoring last run state)

I am using the jdbc plugin to fetch data from a PostgreSQL DB. It seems to work fine for the full export and I am able to pull the data, but it is not working according to the saved state: on every run all of the data is queried again, so there are a lot of duplicates.
I checked .logstash_jdbc_last_run and the metadata state is updated as required, yet the plugin still imports the entire table on every run. Is anything wrong in the config?
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://x.x.x.x:5432/dodb"
    jdbc_user => "myuser"
    jdbc_password => "passsword"
    jdbc_validate_connection => true
    jdbc_driver_library => "/opt/postgresql-9.4.1207.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "select id,timestamp,distributed_query_id,distributed_query_task_id, "columns"->>'uid' as uid, "columns"->>'name' as name from distributed_query_result;"
    schedule => "* * * * *"
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
    clean_run => true
  }
}
output {
  kafka {
    topic_id => "psql-logs"
    bootstrap_servers => "x.x.x.x:9092"
    codec => "json"
  }
}
Any help is appreciated, thanks in advance. I used the doc below for reference:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html

Logstash is indexing only one row of select query from mysql to elastic search

I am trying to index data from a MySQL DB into Elasticsearch using Logstash. Logstash runs without errors, but the problem is that it indexes only one row from my SELECT query.
Below are the versions of the software I am using:
elastic search : 2.4.1
logstash: 5.1.1
mysql: 5.7.17
jdbc_driver_library: mysql-connector-java-5.1.40-bin.jar
I am not sure if this is because logstash and elasticsearch versions are different.
Below is my pipeline configuration:
input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM employee"
    use_column_value => true
    tracking_column => "id"
  }
}
output {
  elasticsearch {
    index => "logstash"
    document_type => "sometype"
    document_id => "%{uid}"
    hosts => ["localhost:9200"]
  }
}
It seems like the tracking_column (id) which you're using in the jdbc plugin and the document_id (uid) in the output are different. What if you made them both the same? It would be easy to fetch all the records by id and push them into ES using that same id, which also reads more clearly:
document_id => "%{id}" <-- make sure you've got the exact spelling of your column
Also, please try adding the following line to your jdbc input after tracking_column:
tracking_column_type => "numeric"
Additionally, to make sure the .logstash_jdbc_last_run file does not persist between runs of the Logstash file, include the following line as well:
clean_run => true
So this is how your jdbc input should look:
jdbc {
  jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
  jdbc_user => "user"
  jdbc_password => "password"
  schedule => "* * * * *"
  statement => "SELECT * FROM employee"
  use_column_value => true
  tracking_column => "id"
  tracking_column_type => "numeric"
  clean_run => true
}
Other than that the conf seems to be fine, unless you want to use :sql_last_value so that only newly added records in your database table are picked up. Hope it helps!
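For completeness, here is a sketch of how the matching output section could look, assuming the employee table's primary key column is named id (index, type, and hosts are taken from the question):
output {
  elasticsearch {
    index => "logstash"
    document_type => "sometype"
    # same column as tracking_column, so re-read rows update in place
    # instead of piling up as duplicates
    document_id => "%{id}"
    hosts => ["localhost:9200"]
  }
}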

Using an id of a table for sql_last_value in logstash?

I have a MySQL statement like this within the jdbc plugin of my Logstash input:
statement => "SELECT * from TEST where id > :sql_last_value"
My table doesn't have any date or datetime field. So I'm trying to update the index by checking, minute by minute using the scheduler, whether any new rows have been added to the table.
Only new records should be picked up, rather than value changes to existing records. To do this I have this kind of Logstash input:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://myhostmachine:3306/mydb"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_validate_connection => true
    jdbc_driver_library => "/mypath/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "* * * * *"
    statement => "SELECT * from mytable where id > :sql_last_value"
    use_column_value => true
    tracking_column => id
    last_run_metadata_path => "/path/.logstash_jdbc_last_run"
    clean_run => true
  }
}
So whenever I create an index and run this Logstash file in order to upload the docs, nothing gets uploaded at all; the doc count shows zero. I made sure that I deleted .logstash_jdbc_last_run before I ran the Logstash conf file.
Part of logstash console output:
[2016-11-02T16:33:00,294][INFO ][logstash.inputs.jdbc ]
(0.002000s) SELECT count(*) AS count FROM (SELECT * from TEST where
id > '2016-11-02 11:02:00') AS t1 LIMIT 1
This keeps going, checking minute by minute, which is correct, but it never gets the records. How does this work?
Am I missing something? Any help could be appreciated.
You need to modify your logstash configuration like this:
jdbc {
  jdbc_connection_string => "jdbc:mysql://myhostmachine:3306/mydb"
  jdbc_user => "root"
  jdbc_password => "root"
  jdbc_validate_connection => true
  jdbc_driver_library => "/mypath/mysql-connector-java-5.1.39-bin.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_paging_enabled => "true"
  jdbc_page_size => "50000"
  schedule => "* * * * *"
  statement => "SELECT * from TEST where id > :sql_last_value"
  use_column_value => true
  tracking_column => "id"
  tracking_column_type => "numeric"
  clean_run => true
  last_run_metadata_path => "/mypath/.logstash_jdbc_last_run"
}
The last five settings are important in your case. Also make sure to delete the .logstash_jdbc_last_run file yourself, even though clean_run => true should take care of it.
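For reference, a short annotated sketch of why these settings matter; the behaviour in the comments is as described in the jdbc input plugin documentation, and the query is the one from the question:
jdbc {
  # ... connection settings as in the block above ...
  # With use_column_value => true, tracking_column => "id" and
  # tracking_column_type => "numeric", :sql_last_value starts at 0
  # rather than the 1970-01-01 timestamp default; after each run it is
  # set to the last "id" value read and persisted to
  # last_run_metadata_path, so only newer rows are fetched next time.
  statement => "SELECT * from TEST where id > :sql_last_value"
}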

Connect to multiple databases dynamically using Logstash JDBC Input plugin

I am using the Logstash JDBC input plugin to read data from a database and index it into Elasticsearch.
I have a separate database for each customer and I want to connect to them one by one dynamically to fetch data.
Is there any provision or parameter in the JDBC input plugin or Logstash to connect to multiple databases?
e.g.
input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # MYDB will be set dynamically.
    jdbc_connection_string => "jdbc:mysql://localhost:3306/MYDB"
    jdbc_user => "mysql"
    parameters => { "favorite_artist" => "Beethoven" }
    schedule => "* * * * *"
    statement => "SELECT * from songs where artist = :favorite_artist"
  }
}
The only solution I can think of is writing a script that updates the Logstash config to connect to the specified databases one by one and runs Logstash with it.
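One way to sketch that idea without rewriting the config file itself is Logstash's environment-variable substitution; CUSTOMER_DB below is a hypothetical variable that the wrapper script would export before each run:
input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # ${CUSTOMER_DB} is resolved from the environment when Logstash starts,
    # so a wrapper script can export a different database name for each run
    jdbc_connection_string => "jdbc:mysql://localhost:3306/${CUSTOMER_DB}"
    jdbc_user => "mysql"
    parameters => { "favorite_artist" => "Beethoven" }
    # no schedule: the statement runs once and Logstash exits,
    # letting the wrapper move on to the next database
    statement => "SELECT * from songs where artist = :favorite_artist"
  }
}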
Let me update this: for the same kind of purpose, I used two jdbc input sections, but only the first section was considered.
input {
  jdbc {
    jdbc_connection_string => "XXXX"
    jdbc_user => "XXXX"
    jdbc_password => "XXXX"
    statement => "select * from product"
    jdbc_driver_library => "/usr/share/logstash/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
  }
  jdbc {
    jdbc_connection_string => "YYYY"
    jdbc_user => "YYYYY"
    jdbc_password => "YYYY"
    statement => "select * from product"
    jdbc_driver_library => "/usr/share/logstash/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    user => "XXX"
    password => "XXXX"
    index => "XXXX"
    document_type => "XXXX"
  }
}

Resources