I am using the Logstash JDBC input to keep things synced between MySQL and Elasticsearch. It's working fine for one table, but now I want to do it for multiple tables. Do I need to run multiple instances in the terminal,
logstash agent -f /Users/logstash/logstash-jdbc.conf
each with its own select query, or is there a better way of doing it so that multiple tables can be kept updated?
My config file:
input {
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table1"
}
}
output {
elasticsearch {
index => "testdb"
document_type => "table1"
document_id => "%{table_id}"
hosts => "localhost:9200"
}
}
You can definitely have a single config with multiple jdbc inputs and then parametrize the index and document_type in your elasticsearch output depending on which table the event is coming from.
input {
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table1"
type => "table1"
}
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table2"
type => "table2"
}
# add more jdbc inputs to suit your needs
}
output {
elasticsearch {
index => "testdb"
document_type => "%{type}" # <- use the type from each input
hosts => "localhost:9200"
}
}
This will not create duplicate data, and it is compatible with Logstash 6.x.
# YOUR_DATABASE_NAME : test
# FIRST_TABLE : place
# SECOND_TABLE : things
# SET_DATA_INDEX : test_index_1, test_index_2
input {
jdbc {
# The path to our downloaded jdbc driver
jdbc_driver_library => "/mysql-connector-java-5.1.44-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
# MySQL jdbc connection string to our database, YOUR_DATABASE_NAME
jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
# The user we wish to execute our statement as
jdbc_user => "root"
jdbc_password => ""
schedule => "* * * * *"
statement => "SELECT #slno:=#slno+1 aut_es_1, es_qry_tbl.* FROM (SELECT * FROM `place`) es_qry_tbl, (SELECT #slno:=0) es_tbl"
type => "place"
add_field => { "queryFunctionName" => "getAllDataFromFirstTable" }
use_column_value => true
tracking_column => "aut_es_1"
}
jdbc {
# The path to our downloaded jdbc driver
jdbc_driver_library => "/mysql-connector-java-5.1.44-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
# MySQL jdbc connection string to our database, YOUR_DATABASE_NAME
jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
# The user we wish to execute our statement as
jdbc_user => "root"
jdbc_password => ""
schedule => "* * * * *"
statement => "SELECT #slno:=#slno+1 aut_es_2, es_qry_tbl.* FROM (SELECT * FROM `things`) es_qry_tbl, (SELECT #slno:=0) es_tbl"
type => "things"
add_field => { "queryFunctionName" => "getAllDataFromSecondTable" }
use_column_value => true
tracking_column => "aut_es_2"
}
}
# install uuid plugin 'bin/logstash-plugin install logstash-filter-uuid'
# The uuid filter allows you to generate a UUID and add it as a field to each processed event.
filter {
mutate {
add_field => {
"[#metadata][document_id]" => "%{aut_es_1}%{aut_es_2}"
}
}
uuid {
target => "uuid"
overwrite => true
}
}
output {
stdout {codec => rubydebug}
if [type] == "place" {
elasticsearch {
hosts => "localhost:9200"
index => "test_index_1_12"
#document_id => "%{aut_es_1}"
document_id => "%{[#metadata][document_id]}"
}
}
if [type] == "things" {
elasticsearch {
hosts => "localhost:9200"
index => "test_index_2_13"
document_id => "%{[#metadata][document_id]}"
# document_id => "%{aut_es_2}"
# you can set document_id . otherwise ES will genrate unique id.
}
}
}
If you need to run more than one pipeline in the same process, Logstash provides a way to do this through a configuration file called pipelines.yml, using its multiple pipelines feature.
Using multiple pipelines is especially useful if your current configuration has event flows that don't share the same inputs/filters and outputs and are being separated from each other using tags and conditionals.
The multiple pipelines documentation is a more helpful resource.
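A minimal pipelines.yml sketch, assuming the two jdbc inputs above are split into separate config files (the pipeline ids and file paths are placeholders):

# config/pipelines.yml
- pipeline.id: table1-sync
  path.config: "/Users/logstash/table1.conf"
- pipeline.id: table2-sync
  path.config: "/Users/logstash/table2.conf"

Start Logstash without the -f flag (e.g. just bin/logstash) so it reads pipelines.yml and runs both pipelines in the same process.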
How can I insert values from multiple tables, each into its own table?
Using Logstash, I want to index multiple tables into Elasticsearch.
I have used Logstash with JDBC several times,
but only one value is saved in one table.
I tried the Stack Overflow answer below, but I couldn't solve it.
-> multiple inputs on logstash jdbc
This is my config file.
This is the code that I executed myself.
input {
jdbc {
jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
jdbc_user => "root"
jdbc_password => "1234"
schedule => "* * * * *"
statement => "select * from table_name1"
tracking_column => "table_name1"
use_column_value => true
clean_run => true
}
jdbc {
jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
jdbc_user => "root"
jdbc_password => "1234"
schedule => "* * * * *"
statement => "select * from table_name2"
tracking_column => "table_name2"
use_column_value => true
clean_run => true
}
jdbc {
jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.23.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/db_name?useSSL=false&user=root&password=1234"
jdbc_user => "root"
jdbc_password => "1234"
schedule => "* * * * *"
statement => "select * from table_name3"
tracking_column => "table_name3"
use_column_value => true
clean_run => true
}
}
output {
elasticsearch {
hosts => "localhost:9200"
index => "aws_05181830_2"
document_type => "%{type}"
document_id => "{%[#metadata][document_id]}"
}
stdout {
codec => rubydebug
}
}
Problem
1. As you can see in the picture, only one value is saved in one table.
2. When a new table comes in, the existing table's values disappear.
My goals
How do I save the data from each table properly, without duplicates?
You are setting the document_id of the document in elasticsearch using
document_id => "{%[#metadata][document_id]}"
This is not a valid sprintf reference, so it uses the literal value {%[#metadata][document_id]}. As a result, every document you index overwrites the previous document. I suggest you remove this option.
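If you do want stable document ids (for example to make re-runs idempotent), the valid sprintf form is %{fieldname}, and the referenced field must actually exist on the event. A minimal sketch, assuming each table exposes a unique key column (the column name id is a placeholder):

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "aws_05181830_2"
    # "id" stands in for whatever unique key column your table provides
    document_id => "%{id}"
  }
}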
Is there any way I can configure Logstash so that it picks up delta records in real time automatically? If not, is there any open-source plugin/tool available to achieve this? Thanks for the help.
Try the configuration below for MS SQL Server. You need to schedule it by setting a schedule period and a statement containing the query that fetches the data from your database.
input {
jdbc {
jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=test"
# The user we wish to execute our statement as
jdbc_user => "sa"
jdbc_password => "sasa"
# The path to our downloaded jdbc driver
jdbc_driver_library => "C:\Users\abhijitb\.m2\repository\com\microsoft\sqlserver\mssql-jdbc\6.2.2.jre8\mssql-jdbc-6.2.2.jre8.jar"
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
#clean_run => true
schedule => "* * * * *"
#query
statement => "SELECT * FROM Student where studentid > :sql_last_value"
use_column_value => true
tracking_column => "studentid"
}
}
output {
#stdout { codec => json_lines }
elasticsearch {
"hosts" => "localhost:9200"
"index" => "student"
"document_type" => "data"
"document_id" => "%{studentid}"
}
}
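Note that studentid > :sql_last_value only picks up newly inserted rows. If updated rows also need to be re-indexed, a variant that tracks a last-modified timestamp is one option; this is a sketch assuming the table has an updated_at column (that column name is an assumption):

input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=test"
    jdbc_user => "sa"
    jdbc_password => "sasa"
    jdbc_driver_library => "C:\Users\abhijitb\.m2\repository\com\microsoft\sqlserver\mssql-jdbc\6.2.2.jre8\mssql-jdbc-6.2.2.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    schedule => "* * * * *"
    # updated_at is assumed; use whatever column your application touches on every change
    statement => "SELECT * FROM Student WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}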
I'm getting this warning when I start Logstash with the config given below.
If type has been removed, then how do I map multiple JDBC inputs to separate indices called "agency" and "subscriber"? How do I define the output to Elasticsearch?
input {
jdbc {
jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
jdbc_user => "XXXX"
jdbc_password => "XXXX"
jdbc_paging_enabled => "true"
jdbc_fetch_size => 500
lowercase_column_names => "false"
schedule => "* * * * * *"
last_run_metadata_path => "\RunConfig\logpos\agency_last_run"
statement_filepath => "\RunConfig\sql\agency.sql"
type => "agencydetails"
}
jdbc {
type => "subscriberdetails"
jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
jdbc_user => "XXXX"
jdbc_password => "XXXX"
jdbc_paging_enabled => "true"
jdbc_fetch_size => 500
lowercase_column_names => "false"
schedule => "* * * * * *"
last_run_metadata_path => "RunConfig\logpos\subscriber_last_run"
statement_filepath => "\RunConfig\sql\subscriber.sql"
}
}
You can use two separate configuration files to define two pipelines: each pipeline will fetch from only one JDBC input and insert into the defined index. In this case, you will need two running instances of Logstash.
Otherwise, you can also use a single instance, using if/else to route the data to the preferred index:
output
{
if [type] == "agencydetails"
{
elasticsearch
{
hosts => "localhost:9200"
user => "xxx"
password => "xxx"
index => "agencydetails"
}
}
else
{
elasticsearch
{
hosts => "localhost:9200"
user => "xxx"
password => "xxx"
index => "subscriberdetails"
}
}
}
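Since mapping types (and therefore document_type) are deprecated in recent Elasticsearch versions, another option is to keep the routing information in event metadata instead of a type field. A sketch, assuming you add an add_field entry on each jdbc input ([@metadata][target_index] is just a name chosen here):

input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
    jdbc_user => "XXXX"
    jdbc_password => "XXXX"
    schedule => "* * * * * *"
    statement_filepath => "\RunConfig\sql\agency.sql"
    add_field => { "[@metadata][target_index]" => "agencydetails" }
  }
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
    jdbc_user => "XXXX"
    jdbc_password => "XXXX"
    schedule => "* * * * * *"
    statement_filepath => "\RunConfig\sql\subscriber.sql"
    add_field => { "[@metadata][target_index]" => "subscriberdetails" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    user => "xxx"
    password => "xxx"
    # the index name comes from whichever input produced the event;
    # [@metadata] fields are not written into the stored document
    index => "%{[@metadata][target_index]}"
  }
}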
I've got two configuration files, each pointing at a different table of the database, but in Elasticsearch the two tables get merged: items from both tablea and tableb are all copied as typea, and items from both tablea and tableb are all copied as typeb.
But I configured it so that tablea would become typea and tableb would become typeb.
Here's my configuration filea
input {
jdbc {
jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.42-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://db:3306/nombase"
jdbc_user => "root"
jdbc_password => "root"
schedule => "* * * * *"
statement => "SELECT * FROM `tablecandelete`"
}
}
## Add your filters / logstash plugins configuration here
output {
stdout { codec => json_lines }
if [deleted] == 1 {
elasticsearch {
hosts => "elasticsearch:9200"
index => "nombase"
document_type => "tablecandelete"
## TO PREVENT DUPLICATE ITEMS
document_id => "tablecandelete-%{id}"
action => "delete"
}
} else {
elasticsearch {
hosts => "elasticsearch:9200"
index => "nombase"
document_type => "tablecandelete"
## TO PREVENT DUPLICATE ITEMS
document_id => "tablecandelete-%{id}"
}
}
}
Here's my configuration fileb
input {
jdbc {
jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.42-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://db:3306/nombase"
jdbc_user => "root"
jdbc_password => "root"
schedule => "* * * * *"
statement => "SELECT * FROM `nomtable`"
}
}
## Add your filters / logstash plugins configuration here
output {
stdout { codec => json_lines }
elasticsearch {
hosts => "elasticsearch:9200"
index => "nombase"
document_type => "nomtable"
## TO PREVENT DUPLICATE ITEMS
document_id => "nomtable-%{id}"
}
}
Does anybody have an idea? Thanks for any response/help
While you do have two different configuration files, Logstash doesn't treat them as independent -- you could have all of your inputs in one file and all of your outputs in another file.
To separate things out, you have to add type => something_unique on each of your inputs and then surround the remaining code in each config file with
if [type] == 'something_unique' {
  # config here
}
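Applied to fileb above, for example, a sketch would look like this (filea gets the same treatment: type => "tablecandelete" on its input and an if [type] == "tablecandelete" wrapper around its existing if/else on [deleted]):

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.42-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://db:3306/nombase"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "SELECT * FROM `nomtable`"
    type => "nomtable"  # unique per input
  }
}
output {
  if [type] == "nomtable" {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "nombase"
      document_type => "nomtable"
      ## TO PREVENT DUPLICATE ITEMS
      document_id => "nomtable-%{id}"
    }
  }
}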
I am using the Logstash JDBC input plugin to read data from a database and index it into Elasticsearch.
I have a separate database for each customer, and I want to connect to them one by one dynamically to fetch data.
Is there any provision or parameter in the JDBC input plugin or Logstash to connect to multiple databases?
e.g.
input {
jdbc {
jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/MYDB"
# MYDB will be set dynamically.
jdbc_user => "mysql"
parameters => { "favorite_artist" => "Beethoven" }
schedule => "* * * * *"
statement => "SELECT * from songs where artist = :favorite_artist"
}
}
The only solution I can think of is writing a script that updates the Logstash config to connect to the specified databases one by one and runs Logstash with it.
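A sketch of that idea, using Logstash's ${VAR} environment-variable substitution in the config so the file itself does not need rewriting (the MYDB variable name, the songs.conf filename, and the customer database names are assumptions):

input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # ${MYDB} is resolved from the environment when Logstash starts
    jdbc_connection_string => "jdbc:mysql://localhost:3306/${MYDB}"
    jdbc_user => "mysql"
    parameters => { "favorite_artist" => "Beethoven" }
    statement => "SELECT * from songs where artist = :favorite_artist"
  }
}

# driver script: run Logstash once per customer database
# for db in customer1 customer2 customer3; do
#   MYDB="$db" bin/logstash -f songs.conf
# done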
Let me update this:
for the same kind of purpose, I used two JDBC input sections, but only the first section was considered.
input {
jdbc {
jdbc_connection_string => "XXXX"
jdbc_user => "XXXX"
jdbc_password => "XXXX"
statement => "select * from product"
jdbc_driver_library => "/usr/share/logstash/ojdbc7.jar"
jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
}
jdbc {
jdbc_connection_string => "YYYY"
jdbc_user => "YYYYY"
jdbc_password => "YYYY"
statement => "select * from product"
jdbc_driver_library => "/usr/share/logstash/ojdbc7.jar"
jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
}
}
output {
elasticsearch {
hosts => "localhost:9200"
user => "XXX"
password => "XXXX"
index => "XXXX"
document_type => "XXXX"
}
}