I am trying to get data from MongoDB into Elasticsearch using Logstash, but I get the error below:
Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::OrgLogstash::MissingConverterException:
Below is my config file:
input {
  jdbc {
    jdbc_driver_library => "D:/mongojdbc1.2.jar"
    jdbc_driver_class => "com.dbschema.MongoJdbcDriver"
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/users"
    jdbc_user => ""
    jdbc_validate_connection => true
    statement => "db.user_details.find({})"
  }
}
output {
  elasticsearch {
    hosts => 'http://localhost:9200'
    index => 'person_data'
    document_type => "person_data"
  }
  stdout { codec => rubydebug }
}
This possibly happens because certain data types in MongoDB might not be convertible to data types Elasticsearch understands. Maybe you should try selecting only a few fields and then see which one is failing.
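For example, one way to narrow it down (a minimal sketch, assuming the DbSchema Mongo JDBC driver accepts a projection in find(), and using name and email purely as placeholder field names) is to project only a couple of fields and add the rest back one at a time until the MissingConverterException reappears:
input {
  jdbc {
    jdbc_driver_library => "D:/mongojdbc1.2.jar"
    jdbc_driver_class => "com.dbschema.MongoJdbcDriver"
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/users"
    jdbc_user => ""
    jdbc_validate_connection => true
    # start with a narrow projection, then widen it field by field
    statement => "db.user_details.find({}, {'name': 1, 'email': 1})"
  }
}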
I am getting the error below:
Rejecting mapping update to [db] as the final mapping would have more than 1 type: [meeting_invities, meetingroom"}}}}
Below is my logstash-mysql.conf. I have to use multiple tables in the jdbc input. Please advise.
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db"
    jdbc_user => "root"
    jdbc_password => "pwd"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM meeting"
    tags => "dat_meeting"
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db"
    jdbc_user => "root"
    jdbc_password => "pwd"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM meeting_invities;"
    tags => "dat_meeting_invities"
  }
}
output {
  stdout { codec => json_lines }
  if "dat_meeting" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      document_type => "meeting"
    }
  }
  if "dat_meeting_invities" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      document_type => "meeting_invities"
    }
  }
}
Your elasticsearch outputs use the document_type option with two different values. This option sets the _type of the indexed documents, and since version 6.X an index can only have one type.
The option is deprecated in version 7.X and will be removed in future versions, since Elasticsearch is going typeless.
Since Elasticsearch won't allow you to index more than one type per index, you need to check which type the first document was indexed with; that is the type Elasticsearch will accept for every future document, so use it in both document_type options.
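A minimal sketch of what that could look like, assuming the index was first created with the meeting type (you can confirm with GET meetingroomdb/_mapping); both outputs then declare the same document_type:
output {
  stdout { codec => json_lines }
  if "dat_meeting" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      document_type => "meeting"
    }
  }
  if "dat_meeting_invities" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      # reuse the existing type instead of meeting_invities so both outputs share one type
      document_type => "meeting"
    }
  }
}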
I'm using the JDBC configuration below in Logstash to update an already existing index in Elasticsearch, without duplicating rows or adding an updated row as another new row.
Versions: Elasticsearch, Logstash and Kibana are all v7.1.0.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://DB01:1433;databasename=testdb;integratedSecurity=true"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_driver_library => "C:\Program Files\sqljdbc_6.2\enu\mssql-jdbc-6.2.2.jre8.jar"
    jdbc_user => nil
    statement => "SELECT * from data WHERE updated_on > :sql_last_value ORDER BY updated_on"
    use_column_value => true
    tracking_column => "updated_on"
    tracking_column_type => "timestamp"
  }
}
output {
elasticsearch { hosts => ["localhost:9200"]
index => "datau"
action=>update
document_id => "%{id}"
doc_as_upsert =>true}
stdout { codec => rubydebug }
}
When I run the above in Logstash (logstash -f myfile.conf), the error below appears:
[2019-08-21T10:46:33,864][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_template/logstash'", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError", :backtrace=>["D:/ELK 6.4.0/logstash-6.4.0/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in `perform_request'", "D:/ELK 6.4.0/logstash-6.4.0/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'"...
Where am I going wrong?
Remove } from this line:
doc_as_upsert =>true}
The issue was with version compatibility.
I had used Logstash 6.4 against Elasticsearch 7.1.
Once my Logstash was upgraded, the issue was resolved.
Thanks!
I'm trying to import a MySQL table into Elasticsearch via Logstash. One column is of type "varbinary", which causes the following error:
[2018-10-10T12:35:54,922][ERROR][logstash.outputs.elasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"\"\\xC3\" from ASCII-8BIT to UTF-8", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "/usr/share/logstash/vendor/$
My logstash config:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/xyz"
    # The user we wish to execute our statement as
    jdbc_user => "test"
    jdbc_password => "test"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    statement => "SELECT * FROM x"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost:9200"
    index => "x"
    document_type => "data"
  }
}
How can I convert the varbinary to UTF-8? Do I have to use a special filter?
Alright...after spending hours on this I found the solution right after posting this question:
columns_charset => { "column0" => "UTF8" }
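For reference, columns_charset is an option of the jdbc input itself, so it sits next to the statement; below is a minimal sketch based on the config above, where column0 is just a stand-in for the actual name of the varbinary column:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/xyz"
    jdbc_user => "test"
    jdbc_password => "test"
    jdbc_driver_library => "/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM x"
    # tell Logstash which charset to assume for this one column
    columns_charset => { "column0" => "UTF8" }
  }
}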
Try using the optional characterEncoding=utf8 parameter in the connection string:
jdbc_connection_string => "jdbc:mysql://localhost:3306/xyz?useSSL=false&useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&autoReconnect=true"
I am trying to get my SQL Server table into Elasticsearch using Logstash. For that I have created the configuration file below.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://xxx.xxx.x.xxx:1433/DB_name"
    jdbc_user => "devuser"
    jdbc_password => "devuser"
    jdbc_driver_library => "D:/Mssqljdbc/sqljdbc4-2.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    statement => "SELECT * FROM sample"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost"
    index => "testmigrate"
    document_type => "data"
  }
}
Then I am using bin\logstash -f sqltable.conf to execute it.
But I am getting:
Error: Java::ComMicrosoftSqlserverJdbc::SQLServerException: The port number 1433/DB_name is not valid.
I checked that I am able to ping the particular IP address and the port is also open, but I still get the same error. Please help.
After a bit of digging I made a small change and it worked for me: I added databaseName= in front of DB_name, separated from the port by a semicolon instead of a slash.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://xxx.xxx.x.xxx:1433;databaseName=DB_name"
    jdbc_user => "devuser"
    jdbc_password => "devuser"
    jdbc_driver_library => "D:/Mssqljdbc/sqljdbc4-2.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    statement => "SELECT * FROM sample"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost"
    index => "testmigrate"
    document_type => "data"
  }
}
It is quite strange that I didn't find this in any of the documentation.
I want to import data from Oracle and would like to pass one of the fields of the imported data to Elasticsearch to fetch some other details.
For example: if I have an Employee Id which I get from the Oracle DB for, say, 100 rows, I want to pass all these 100 employee ids to Elasticsearch and get the employee name and salary.
I am able to retrieve the data from Oracle now, but I am unable to connect to Elasticsearch. Also, I am not sure what would be a better approach for this.
I am using Logstash 2.3.3 and the Elasticsearch Logstash filter plugin.
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@<dbhost>:<port>:<sid>"
    # The user we wish to execute our statement as
    jdbc_user => "user"
    jdbc_password => "pass"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "<path>"
    # The name of the driver class for oracle
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    # our query
    statement => "SELECT empId, desg from Employee"
  }
  elasticsearch {
    hosts => "https://xx.corp.com:9200"
    index => "empdetails"
  }
}
output {
  stdout { codec => json_lines }
}
I am getting the error below because of the elasticsearch part.
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: ["https://xx.corp.com:9200"], index=>"empdetails ", query=>"empId:'1001'", codec=>"UTF-8">, scan=>true, size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"#metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
Error: [401] {:level=>:error}
You need to use the elasticsearch filter, not the elasticsearch input:
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@<dbhost>:<port>:<sid>"
    # The user we wish to execute our statement as
    jdbc_user => "user"
    jdbc_password => "pass"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "<path>"
    # The name of the driver class for oracle
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    # our query
    statement => "SELECT empId, desg from Employee"
  }
}
filter {
  elasticsearch {
    hosts => ["xx.corp.com:9200"]
    query => "empId:%{empId}"
    user => "admin"
    password => "admin"
    sort => "empName:desc"
    fields => {
      "empName" => "empName"
      "salary" => "salary"
    }
  }
}
output {
  stdout { codec => json_lines }
}
As a result, each record fetched via JDBC will be enriched by the corresponding data found in ES.