Logstash Error: A plugin had an unrecoverable error - elasticsearch

A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Jdbc jdbc_connection_string=>"jdbc:mysql://dns/db", jdbc_user=>"root", jdbc_password=><password>, jdbc_driver_library=>"/home/ubuntu/mysql-connector-java-5.1.21.jar", jdbc_driver_class=>"com.mysql.jdbc.Driver", statement=>"SELECT * FROM table;", codec=><LogStash::Codecs::JSON id=>"json_ff05abb6-1b36-4ebf-aba1-1f8cf47a13a5", enable_metric=>true, charset=>"UTF-8">, id=>"93f23172918335b7f06ba3f8ee201c0b78f2c8e2-1", enable_metric=>true, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 UTC}, last_run_metadata_path=>"/home/ubuntu/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: undefined method `close_jdbc_connection' for #<Sequel::JDBC::Database:0x745d6c19>
Version: logstash-5.5.0
Conf file:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://dns:3306/stats"
    jdbc_user => "root"
    jdbc_password => "sdf"
    #jdbc_validate_connection => true
    jdbc_driver_library => "/home/ubuntu/mysql-connector-java-5.1.42/mysql-connector-java-5.1.42-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM table;"
    #codec => "json"
  }
}
output {
  elasticsearch {
    index => "mysqltest"
    document_type => "mysqltest_type"
    document_id => "%{id}"
    hosts => "dns:80"
  }
}
What is this about, and how can I solve it?
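A hedged pointer, not from the original thread: the undefined method `close_jdbc_connection` failure comes from inside the jdbc input plugin itself (the method is being called on the Sequel database object the plugin holds), which suggests a version mismatch between logstash-input-jdbc and its Sequel dependency rather than a problem with this configuration. A common first step is to update the plugin and re-test the config; the conf file name below is a placeholder:

bin/logstash-plugin update logstash-input-jdbc
bin/logstash -f mysqltest.conf --config.test_and_exit

Also worth checking: the error log references /home/ubuntu/mysql-connector-java-5.1.21.jar while the conf file points at the 5.1.42 jar, so the log and the config shown may come from different runs.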

Related

Trying to get the data from oracle database through logstash but data is not coming to elasticsearch

I am trying to get data from an Oracle database through Logstash, but the data is not reaching Elasticsearch. I am not sure what I missed; I don't see any error in the Logstash log file. Below is my Logstash conf file.
input {
  jdbc {
    jdbc_validate_connection => "true"
    jdbc_connection_string => "jdbc:oracle:thin:@//server:1521/db"
    jdbc_user => "user"
    jdbc_password => "pass"
    jdbc_driver_library => "/etc/logstash/files/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_paging_enabled => "true"
    schedule => "* * * * *"
    statement_filepath => "/etc/logstash/files/keycount.sql"
    use_column_value => "true"
    tracking_column => "timestamp"
    last_run_metadata_path => "/etc/logstash/files/.logstash_jdbc_last_run"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "keyinventory-%{+YYYY}"
  }
  stdout {
    codec => rubydebug
  }
}
Please, someone, help me.
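A hedged guess, not part of the original thread: with use_column_value enabled, tracking_column => "timestamp" is compared using the plugin's default tracking_column_type of "numeric"; for a date/time column the type usually has to be declared explicitly, otherwise the persisted sql_last_value can silently filter out every row. A minimal sketch of the lines to check, assuming the column really holds a date/time:

    use_column_value => "true"
    tracking_column => "timestamp"
    tracking_column_type => "timestamp"  # assumption: column is a date/time, not a number
    clean_run => true                    # temporary: reset the stored sql_last_value while testing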

Error in JDBC connection using logstash

I am trying to get my SQL Server table into Elasticsearch using Logstash. For that I have created the configuration file below.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://xxx.xxx.x.xxx:1433/DB_name"
    jdbc_user => "devuser"
    jdbc_password => "devuser"
    jdbc_driver_library => "D:/Mssqljdbc/sqljdbc4-2.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    statement => "SELECT * FROM sample"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost"
    index => "testmigrate"
    document_type => "data"
  }
}
Then I am using bin\logstash -f sqltable.conf to execute it.
But I am getting
Error: Java::ComMicrosoftSqlserverJdbc::SQLServerException: The port number 1433/DB_name is not valid.
I checked that I can ping the IP address and the port is open, but I still get the same error. Please help.
After a bit of digging I made a small change and it worked for me. I added databaseName= in front of DB_name.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://xxx.xxx.x.xxx:1433;databaseName=DB_name"
    jdbc_user => "devuser"
    jdbc_password => "devuser"
    jdbc_driver_library => "D:/Mssqljdbc/sqljdbc4-2.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    statement => "SELECT * FROM sample"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost"
    index => "testmigrate"
    document_type => "data"
  }
}
It is quite strange that I didn't find this in any of the documentation.
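For context (my summary, not from the thread): unlike the MySQL driver, the Microsoft JDBC driver does not take the database as a /path segment; everything after the host and port is a semicolon-separated list of properties. The general form, with placeholder values:

jdbc:sqlserver://<host>:<port>;databaseName=<database>

This is why 1433/DB_name was parsed as a single, invalid port number.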

Exception: LogStash::ConfigurationError

I am trying to connect to an Oracle database via Logstash and am getting the error below.
Error: oracle.jdbc.OracleDriver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::ConfigurationError
Stack: D:/softwares/logstash-6.2.4/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:162:in `open_jdbc_connection'
Please find my Logstash config file:
input {
  jdbc {
    jdbc_driver_library => "D:\data\ojdbc14.jar"
    jdbc_driver_class => "oracle.jdbc.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@localhost:1521:xe"
    jdbc_user => "user_0ne"
    jdbc_password => "xxxyyyzzz"
    statement => "SELECT * FROM PRODUCT"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_index"
  }
}
Logstash config file (corrected):
input {
  jdbc {
    jdbc_driver_library => "D:\Karthikeyan\data\ojdbc14.jar"
    jdbc_driver_class => "Java::oracle.jdbc.OracleDriver" # problem in this line is corrected
    jdbc_connection_string => "jdbc:oracle:thin:@localhost:1521:xe"
    jdbc_user => "vb"
    jdbc_password => "123456"
    statement => "SELECT * FROM VB_PRODUCT"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_index"
  }
}
You can validate the configuration file using:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/sample.conf --config.test_and_exit

Logstash 6.2.3 Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type

I'm getting this warning when I start Logstash with the config given below. If type has been removed, how do I map multiple jdbc inputs to separate indices called "agency" and "subscriber"? How do I define the Elasticsearch output?
input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
    jdbc_user => "XXXX"
    jdbc_password => "XXXX"
    jdbc_paging_enabled => "true"
    jdbc_fetch_size => 500
    lowercase_column_names => "false"
    schedule => "* * * * * *"
    last_run_metadata_path => "\RunConfig\logpos\agency_last_run"
    statement_filepath => "\RunConfig\sql\agency.sql"
    type => "agencydetails"
  }
  jdbc {
    type => "subscriberdetails"
    jdbc_driver_library => "mysql-connector-java-5.1.44.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/dbname"
    jdbc_user => "XXXX"
    jdbc_password => "XXXX"
    jdbc_paging_enabled => "true"
    jdbc_fetch_size => 500
    lowercase_column_names => "false"
    schedule => "* * * * * *"
    last_run_metadata_path => "RunConfig\logpos\subscriber_last_run"
    statement_filepath => "\RunConfig\sql\subscriber.sql"
  }
}
You can use two separate configuration files to define two pipelines: each pipeline fetches from only one jdbc input and inserts into its own index. In this case you will need two running Logstash instances, as sketched below.
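A minimal sketch of the two-instance route (the file names and data paths are assumptions): each instance must be given its own --path.data, because two Logstash processes cannot share one data directory.

bin/logstash -f /etc/logstash/conf.d/agency.conf --path.data /var/lib/logstash-agency
bin/logstash -f /etc/logstash/conf.d/subscriber.conf --path.data /var/lib/logstash-subscriber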
Otherwise, you can also use a single instance, using if/else conditionals to route the data to the preferred index:
output {
  if [type] == "agencydetails" {
    elasticsearch {
      hosts => "localhost:9200"
      user => "xxx"
      password => "xxx"
      index => "agencydetails"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      user => "xxx"
      password => "xxx"
      index => "subscriberdetails"
    }
  }
}

multiple inputs on logstash jdbc

I am using the Logstash jdbc input to keep things synced between MySQL and Elasticsearch. It's working fine for one table, but now I want to do it for multiple tables. Do I need to open multiple terminals, each running
logstash agent -f /Users/logstash/logstash-jdbc.conf
with its own select query, or is there a better way of doing it so that multiple tables are kept updated?
My config file:
input {
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "select * from table1"
  }
}
output {
  elasticsearch {
    index => "testdb"
    document_type => "table1"
    document_id => "%{table_id}"
    hosts => "localhost:9200"
  }
}
You can definitely have a single config with multiple jdbc inputs and then parametrize the index and document_type in your elasticsearch output depending on which table the event is coming from.
input {
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "select * from table1"
    type => "table1"
  }
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "select * from table2"
    type => "table2"
  }
  # add more jdbc inputs to suit your needs
}
output {
  elasticsearch {
    index => "testdb"
    document_type => "%{type}" # <- use the type from each input
    hosts => "localhost:9200"
  }
}
This will not create duplicate data, and it is compatible with Logstash 6.x.
# YOUR_DATABASE_NAME : test
# FIRST_TABLE : place
# SECOND_TABLE : things
# SET_DATA_INDEX : test_index_1, test_index_2
input {
  jdbc {
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/mysql-connector-java-5.1.44-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # MySQL jdbc connection string to our database, YOUR_DATABASE_NAME
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => ""
    schedule => "* * * * *"
    statement => "SELECT @slno:=@slno+1 aut_es_1, es_qry_tbl.* FROM (SELECT * FROM `place`) es_qry_tbl, (SELECT @slno:=0) es_tbl"
    type => "place"
    add_field => { "queryFunctionName" => "getAllDataFromFirstTable" }
    use_column_value => true
    tracking_column => "aut_es_1"
  }
  jdbc {
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/mysql-connector-java-5.1.44-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # MySQL jdbc connection string to our database, YOUR_DATABASE_NAME
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => ""
    schedule => "* * * * *"
    statement => "SELECT @slno:=@slno+1 aut_es_2, es_qry_tbl.* FROM (SELECT * FROM `things`) es_qry_tbl, (SELECT @slno:=0) es_tbl"
    type => "things"
    add_field => { "queryFunctionName" => "getAllDataFromSecondTable" }
    use_column_value => true
    tracking_column => "aut_es_2"
  }
}
# install uuid plugin: 'bin/logstash-plugin install logstash-filter-uuid'
# The uuid filter allows you to generate a UUID and add it as a field to each processed event.
filter {
  mutate {
    add_field => {
      "[@metadata][document_id]" => "%{aut_es_1}%{aut_es_2}"
    }
  }
  uuid {
    target => "uuid"
    overwrite => true
  }
}
output {
  stdout { codec => rubydebug }
  if [type] == "place" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test_index_1_12"
      #document_id => "%{aut_es_1}"
      document_id => "%{[@metadata][document_id]}"
    }
  }
  if [type] == "things" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test_index_2_13"
      document_id => "%{[@metadata][document_id]}"
      # document_id => "%{aut_es_2}"
      # You can set document_id; otherwise ES will generate a unique id.
    }
  }
}
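One caveat worth flagging (my note, not the answerer's): each event comes from only one of the two inputs, so one of %{aut_es_1} and %{aut_es_2} will always be missing, and its unresolved sprintf reference is left as a literal string inside the document id. A sketch that builds the id per input type instead:

filter {
  if [type] == "place" {
    mutate { add_field => { "[@metadata][document_id]" => "%{aut_es_1}" } }
  } else if [type] == "things" {
    mutate { add_field => { "[@metadata][document_id]" => "%{aut_es_2}" } }
  }
}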
If you need to run more than one pipeline in the same process, Logstash provides a way to do this through a configuration file called pipelines.yml. Using multiple pipelines is especially useful if your current configuration has event flows that don't share the same inputs/filters and outputs and are being separated from each other using tags and conditionals.
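A minimal pipelines.yml sketch (the pipeline ids and config paths are assumptions); Logstash reads this file from its settings directory and runs both pipelines in one process:

- pipeline.id: agency
  path.config: "/etc/logstash/conf.d/agency.conf"
- pipeline.id: subscriber
  path.config: "/etc/logstash/conf.d/subscriber.conf"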
