Logstash 'com.mysql.jdbc.Driver' not loaded - jdbc

I have a problem with jdbc_driver_library. I'm using ELK_VERSION = 6.4.2 and I use Docker for ELK.
When I run:
/opt/logstash# bin/logstash -f /etc/logstash/conf.d/mysql.conf
I'm getting an error:
error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Driver path:
root#xxxxxxx:/etc/logstash/conectors# ls
mysql-connector-java-8.0.12.jar
root#xxxxxxxxxx:/etc/logstash/conectors#
mysql.conf:
input {
jdbc {
jdbc_driver_library => "/etc/logstash/conectors/mysql-connector-java-8.0.12.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
jdbc_user => "demouser"
jdbc_password => "demopassword"
statement => "SELECT id,name,city from ads"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
index => 'test'
document_type => 'tes'
document_id => '%{id}'
hosts => ['http://localhost:9200']
}
}
The whole error:
root#xxxxx:/opt/logstash# bin/logstash -f /etc/logstash/conf.d/mysql.conf
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-11-10T09:03:22,081][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-10T09:03:23,628][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-10T09:03:30,482][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-10T09:03:31,479][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-10T09:03:31,928][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-10T09:03:32,067][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-10T09:03:32,076][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-10T09:03:32,154][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-11-10T09:03:32,210][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-10T09:03:32,267][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-10T09:03:32,760][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x202f727c run>"}
[2018-11-10T09:03:32,980][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-10T09:03:33,877][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-10T09:03:34,315][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"demouser", jdbc_password=><password>, statement=>"SELECT id,name,city from ads", jdbc_driver_library=>"/etc/logstash/conectors/mysql-connector-java-8.0.12.jar", jdbc_connection_string=>"jdbc:mysql://localhost:3306/mydb", id=>"233c4411c2434e93444c3f59eb9503f3a75cab4f85b0a947d96fa6773dac56cd", jdbc_driver_class=>"com.mysql.jdbc.Driver", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_cf5ab80c-91e4-4bc4-8d20-8c5a0f9f8077", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 +0000}, last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::ConfigurationError
Stack: /opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:163:in `open_jdbc_connection'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:221:in `execute_statement'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:263:in `run'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:409:in `inputworker'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:403:in `block in start_input'
When I build an image and use docker run, I get another error:
[2018-11-10T10:32:52,935][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2018-11-10T10:32:52,966][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2018-11-10T10:32:54,509][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Same problem when I use PostgreSQL.
psql.conf
input {
jdbc {
type => 'test'
jdbc_driver_library => '/etc/logstash/postgresql-9.1-901-1.jdbc4.jar'
jdbc_driver_class => 'org.postgresql.Driver'
jdbc_connection_string => 'jdbc:postgresql://localhost:5432/mytestdb'
jdbc_user => 'postgres'
jdbc_password => 'xxxxxx'
jdbc_page_size => '50000'
statement => 'SELECT id, name, city FROM ads'
}
}
Then I run:
/opt/logstash# bin/logstash -f /etc/logstash/conf.d/psql.conf
Error:
error: org.postgresql.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?

I ran into the same issue, and the solution below fixed it for me.
For Logstash 6.2.x and above, place the required driver jars under:
logstash_install_dir/logstash-core/lib/jars/
and don't provide any driver path in the config file.
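For example, with the install location from the question (/opt/logstash) and the jar shown above, the copy would look roughly like this; the paths are illustrative, not prescriptive:

cp /etc/logstash/conectors/mysql-connector-java-8.0.12.jar /opt/logstash/logstash-core/lib/jars/

Then remove the jdbc_driver_library line from the jdbc input (or set it to an empty string) and keep only jdbc_driver_class.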

I solved the problem:
First, check your Java version:
root#xxxxxx:/# java -version
openjdk version "1.8.0_181"
If you are using Java 1.8, use the JDBC42 version of the driver.
If you are using Java 1.7, use the JDBC41 version.
If you are using Java 1.6, use the JDBC4 version.
Postgres setup:
postgresql-9.4-1203.jdbc42.jar
jdbc_driver_library => '/path_to_jar/postgresql-9.4-1203.jdbc42.jar'
jdbc_driver_class => 'org.postgresql.Driver'
MySQL setup:
mysql-connector-java-5.1.46.jar
jdbc_driver_library => "//path_to_jar/mysql-connector-java-5.1.46.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"

With the Connector/J 8.x driver you're using (mysql-connector-java-8.0.12.jar), the JDBC driver class was renamed from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver (see the Connector/J release notes for details). Just update your jdbc_driver_class setting and you should be OK.
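Applied to the mysql.conf above, only the driver class line needs to change:

jdbc_driver_library => "/etc/logstash/conectors/mysql-connector-java-8.0.12.jar"
jdbc_driver_class => "com.mysql.cj.jdbc.Driver"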

I had a similar issue, though with a different setup: I'm using a virtual machine rather than a Docker image. The issue was solved by installing OpenJDK 8 and setting it as the default Java version on my Ubuntu Server virtual machine.
https://linuxize.com/post/install-java-on-ubuntu-18-04/
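On Ubuntu 18.04 that boils down to something like the following (commands as in the linked guide; adjust for your distribution):

sudo apt update
sudo apt install openjdk-8-jdk
sudo update-alternatives --config java   # select the Java 8 entry as the default
java -version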
Hope this helps!
EDIT: Before that, I also had to change the authentication method of the MySQL root user from auth_socket to mysql_native_password:
https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-18-04
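That change is made from the MySQL shell; a minimal sketch, assuming MySQL 5.7/8.0 and a password of your choosing:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';
FLUSH PRIVILEGES;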

Related

Logstash multiple scheduled pipelines

I want to configure multiple Logstash .conf files (pipelines) with different schedule configurations. For example, I have file1.conf with this configuration:
file1.conf
input {
jdbc { jdbc_connection_string=>'jdbc:mysql://x.x.x.x:3306/<databasename>'
# The user we wish to execute our statement as
jdbc_user => '****'
jdbc_password => '*****'
# The path to our downloaded jdbc driver
jdbc_driver_library => 'mysql-connector-java-5.1.49.jar'
jdbc_driver_class => 'com.mysql.jdbc.Driver'
schedule => "30 * * * *"
# our query
statement => "select * from elasticsync2 "
}
}
filter {
grok {
match => {"date" => ["%{DATE:date_format}"]}
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ['https://x.x.x.x:9200/','https://x.x.x.x:9200/']
index => '******'
user => "*********"
password => "*************"
cacert => "./certs/ca.crt"
ssl_certificate_verification => true
document_type => "data"
}
}
The second pipeline:
file2.conf
input {
jdbc { jdbc_connection_string=>'jdbc:mysql://x.x.x.x:3306/<databasename>'
# The user we wish to execute our statement as
jdbc_user => '****'
jdbc_password => '*****'
# The path to our downloaded jdbc driver
jdbc_driver_library => 'mysql-connector-java-5.1.49.jar'
jdbc_driver_class => 'com.mysql.jdbc.Driver'
schedule => "20 * * * *"
# our query
statement => "select * from elasticsync "
}
}
filter {
grok {
match => {"date" => ["%{DATE:date_format}"]}
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ['https://x.x.x.x:9200/','https://x.x.x.x:9200/']
index => '******'
user => "*********"
password => "*************"
cacert => "./certs/ca.crt"
ssl_certificate_verification => true
document_type => "data"
}
}
pipelines.yml
- pipeline.id: first-pip
path.config: "/etc/logstash/conf.d/file1.conf"
queue.type: persisted
- pipeline.id: second-pip
path.config: "/etc/logstash/conf.d/file2.conf"
Logstash logs:
[2022-01-13T20:44:21,652][INFO ][logstash.outputs.elasticsearch][second-pip] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://x.x.x.x:9200/", "https://x.x.x.x:9200/"]}
[2022-01-13T20:44:21,656][INFO ][logstash.outputs.elasticsearch][first-pip] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://x.x.x.x:9200/", "https://x.x.x.x:9200/"]}
[2022-01-13T20:44:22,647][INFO ][logstash.outputs.elasticsearch][first-pip] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_writer:xxxxxx#x.x.x.x:9200/, https://logstash_writer:xxxxxx#x.x.x.x:9200/]}}
[2022-01-13T20:44:22,647][INFO ][logstash.outputs.elasticsearch][second-pip] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_writer:xxxxxx#x.x.x.x:9200/, https://logstash_writer:xxxxxx#x.x.x.x:9200/]}}
[2022-01-13T20:44:23,811][WARN ][logstash.outputs.elasticsearch][second-pip] Restored connection to ES instance {:url=>"https://logstash_writer:xxxxxx#x.x.x.x:9200/"}
[2022-01-13T20:44:23,812][WARN ][logstash.outputs.elasticsearch][first-pip] Restored connection to ES instance {:url=>"https://logstash_writer:xxxxxx#x.x.x.x:9200/"}
[2022-01-13T20:44:23,938][INFO ][logstash.outputs.elasticsearch][second-pip] Elasticsearch version determined (7.14.0) {:es_version=>7}
[2022-01-13T20:44:23,946][WARN ][logstash.outputs.elasticsearch][second-pip] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-01-13T20:44:23,938][INFO ][logstash.outputs.elasticsearch][first-pip] Elasticsearch version determined (7.14.0) {:es_version=>7}
[2022-01-13T20:44:23,951][WARN ][logstash.outputs.elasticsearch][first-pip] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-01-13T20:44:24,245][WARN ][logstash.outputs.elasticsearch][first-pip] Restored connection to ES instance {:url=>"https://logstash_writer:xxxxxx#x.x.x.x:9200/"}
[2022-01-13T20:44:24,247][WARN ][logstash.outputs.elasticsearch][second-pip] Restored connection to ES instance {:url=>"https://logstash_writer:xxxxxx#x.x.x.x:9200/"}
[2022-01-13T20:44:24,602][INFO ][logstash.outputs.elasticsearch][first-pip] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-01-13T20:44:24,606][INFO ][logstash.outputs.elasticsearch][second-pip] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-01-13T20:44:25,004][INFO ][logstash.javapipeline ][first-pip] Starting pipeline {:pipeline_id=>"first-pip", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/file1.conf"], :thread=>"#<Thread:0x5fda836f#/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[2022-01-13T20:44:25,004][INFO ][logstash.javapipeline ][second-pip] Starting pipeline {:pipeline_id=>"second-pip", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/file2.conf"], :thread=>"#<Thread:0x84fc1d4 run>"}
[2022-01-13T20:44:27,912][INFO ][logstash.javapipeline ][second-pip] Pipeline Java execution initialization time {"seconds"=>2.9}
[2022-01-13T20:44:27,914][INFO ][logstash.javapipeline ][first-pip] Pipeline Java execution initialization time {"seconds"=>2.9}
[2022-01-13T20:44:28,025][INFO ][logstash.javapipeline ][second-pip] Pipeline started {"pipeline.id"=>"second-pip"}
[2022-01-13T20:44:28,026][INFO ][logstash.javapipeline ][first-pip] Pipeline started {"pipeline.id"=>"first-pip"}
The main problem is that when I run systemctl start logstash, it only executes the first one, first-pip.
I want to know how to make the schedules and pipelines run in parallel.

Unable to ingest XML file into Elastic Search using Logstash XML filter

I have this XML file, which I stored in D:\ on Windows 10:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<ChainId>7290027600007</ChainId>
<SubChainId>001</SubChainId>
<StoreId>001</StoreId>
<BikoretNo>9</BikoretNo>
<DllVerNo>8.0.1.3</DllVerNo>
</root>
I have installed Elasticsearch and am able to access it at http://localhost:9200/. I have installed Logstash and created logstash-xml.conf to ingest the above XML file. The configuration in logstash-xml.conf is below:
input
{
file
{
path => "D:\data.xml"
start_position => "beginning"
sincedb_path => "NUL"
exclude => "*.gz"
type => "xml"
codec => multiline {
pattern => "<?xml "
negate => "true"
what => "previous"
}
}
}
filter {
xml{
source => "message"
store_xml => false
target => "root"
xpath => [
"/root/ChainId/text()", "ChainId",
"/root/SubChainId/text()", "SubChainId",
"/root/StoreId/text()", "StoreId",
"/root/BikoretNo/text()", "BikoretNo",
"/root/DllVerNo/text()", "DllVerNo"
]
}
mutate {
gsub => [ "message", "[\r\n]", "" ]
}
}
output{
elasticsearch{
hosts => ["http://localhost:9200/"]
index => "parse_xml"
}
stdout
{
codec => rubydebug
}
}
When I run this configuration from the command line, I see the following:
D:\logstash\bin>logstash -f logstash-xml.conf
"Using bundled JDK: ""
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/C:/Users/CHEEWE~1.NGA/AppData/Local/Temp/jruby-3748/jruby14189572270520245744jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to D:/logstash/logs which is now configured via log4j2.properties
[2020-12-05T09:27:21,716][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [mswin32-x86_64]"}
[2020-12-05T09:27:22,053][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-12-05T09:27:24,031][INFO ][org.reflections.Reflections] Reflections took 37 ms to scan 1 urls, producing 23 keys and 47 values
[2020-12-05T09:27:26,083][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2020-12-05T09:27:26,311][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-12-05T09:27:26,378][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-12-05T09:27:26,383][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-12-05T09:27:26,437][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200/"]}
[2020-12-05T09:27:26,487][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2020-12-05T09:27:26,621][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-12-05T09:27:28,152][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["D:/logstash/bin/logstash-xml.conf"], :thread=>"#<Thread:0x65f57880 run>"}
[2020-12-05T09:27:29,176][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.02}
[2020-12-05T09:27:29,640][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-12-05T09:27:29,712][INFO ][filewatch.observingtail ][main][aca15cd3c6850472d105bd7b2b7a43da8ce8ec36a4b0b8c19830d898f1eb1109] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-05T09:27:29,726][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-05T09:27:30,020][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
But going back to Elasticsearch, I can see the logstash index created, but I can't see the XML data loaded.

Pushing data from Logstash to Elasticsearch

I have stored my Logstash configuration file in the same folder where Logstash is installed.
While trying to push the data from Logstash to Elasticsearch, it shows that the server has started, but the data is not pushed to Elasticsearch. How can we validate whether data is being pushed to Elasticsearch or not?
this is my logstash configuration file.
input{
file{
path =>"C:\Elastic\GOOG.csv"
start_position =>"beginning"
}
}
filter{
csv{
columns =>
["date_of_record","open","high","low","close","volume","adj_close"]
separator => ","
}
date {
match => ["date_of_record","yyyy-MM-dd"]
}
mutate {
convert => ["open","float"]
convert => ["high","float"]
convert => ["low","float"]
convert => ["close","float"]
convert => ["volume","integer"]
convert => ["adj_close","float"]
}
}
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "CSVGOGO"
}
}
Logstash Logs are:
c:\Elastic>.\logstash-7.0.0\bin\logstash -f .\gogo.conf
Sending Logstash logs to c:/Elastic/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-10-12T20:13:24,602][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-10-12T20:13:24,831][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-10-12T20:14:42,358][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-10-12T20:14:43,392][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-10-12T20:14:43,868][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-10-12T20:14:43,882][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-10-12T20:14:43,961][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-10-12T20:14:43,971][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-10-12T20:14:44,124][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x22517e24 run>"}
[2019-10-12T20:14:44,604][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-10-12T20:14:48,863][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"c:/Elastic/logstash-7.0.0/data/plugins/inputs/file/.sincedb_1eb0c3bd994c60a8564bc344e0f91452", :path=>["C:\\Elastic\\GOOG.csv"]}
[2019-10-12T20:14:48,976][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-10-12T20:14:49,319][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-10-12T20:14:49,331][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-10-12T20:14:52,244][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Data will be pushed to ES only if it flows correctly through the reader and the processors.
Input: Make sure that the file is actually being read by the input plugin.
Filter: Try writing a ruby processor that prints the data it got from the input.
Output: Write the output to the console as well, to make sure it matches your expectation.
Also, you can start Logstash in debug mode to get more information.
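For example, a debug run for the same config could look like this (--log.level is a standard Logstash flag):

c:\Elastic>.\logstash-7.0.0\bin\logstash -f .\gogo.conf --log.level=debug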
For the ELK stack, to test whether data is pushed to ES (and if you have Kibana installed), follow the process below.
Explanation:
1. Optional: add a stdout output to the Logstash pipeline to show what is going on.
stdout { codec => rubydebug }
2. Mandatory: add sincedb_path => "/dev/null" to the file input.
Logstash has a feature called sincedb: it keeps track of where it last stopped reading a file before it crashed or was stopped.
3. Mandatory: the index name should be lowercase (csvgogo).
4. Optional: set document_type => "csvfile" (if you don't, the default will be 'logs').
So your Logstash pipeline may look like the following:
input{
file{
path =>"C:\Elastic\GOOG.csv"
start_position =>"beginning"
sincedb_path => "/dev/null"
}
}
filter{
csv{
columns => ["date_of_record","open","high","low","close","volume","adj_close"]
separator => ","
}
date {
match => ["date_of_record","yyyy-MM-dd"]
}
mutate {
convert => ["open","float"]
convert => ["high","float"]
convert => ["low","float"]
convert => ["close","float"]
convert => ["volume","integer"]
convert => ["adj_close","float"]
}
}
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "csvgogo"
document_type => "csvfile" #default 'logs'
}
}
1. Try Kibana's Dev Tools ('http://localhost:5601/app/kibana') to run this query:
GET /csvgogo/_search
{
"query": {
"match_all": {}
}
}
2. Try the browser directly: 'http://localhost:9200/csvgogo/_search?pretty', where 'csvgogo' is your ES index name.
It will show you the raw data from Elasticsearch in the browser itself.
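The same check works from the command line, assuming curl is available:

curl -s "http://localhost:9200/csvgogo/_search?pretty"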

How to create Index and Mapping into ES from LOGSTASH

I've been following this tutorial to import data from a DB into Logstash and create an index and mapping in Elasticsearch:
INSERT INTO LOGSTASH SELECT DATA FROM DATABASE
This is my output based on my configuration file:
[2017-10-12T11:50:45,807][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/Users/Bruno/Downloads/logstash-5.6.2/logstash-5.6.2/modules/fb_apache/configuration"}
[2017-10-12T11:50:45,812][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/Users/Bruno/Downloads/logstash-5.6.2/logstash-5.6.2/modules/netflow/configuration"}
[2017-10-12T11:50:46,518][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-10-12T11:50:46,521][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-10-12T11:50:46,652][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2017-10-12T11:50:46,654][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-10-12T11:50:46,716][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date", "include_in_all"=>false}, "#version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-10-12T11:50:46,734][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2017-10-12T11:50:46,749][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-10-12T11:50:47,053][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-12T11:50:47,196][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-10-12T11:50:47,817][INFO ][logstash.inputs.jdbc ] (0.130000s) SELECT * from EP_RDA_STRING
[2017-10-12T11:50:53,095][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Everything seems OK, at least I think, except that when I query the ES server for indexes and mappings, it comes back empty.
http://localhost:9200/_all/_mapping
{}
http://localhost:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
This is my config file:
input {
jdbc {
# sqlserver jdbc connection string to our database, RDA
jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=RDA; integratedSecurity=true;"
# The user we wish to execute our statement as
jdbc_user => ""
# The path to our downloaded jdbc driver
jdbc_driver_library => "C:\mypath\sqljdbc_6.2\enu\mssql-jdbc-6.2.1.jre8.jar"
# The name of the driver class for SQL Server
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
# our query
statement => "SELECT * from EP_RDA_STRING"
}
}
output {
elasticsearch {
index => "RDA"
document_type => "RDA_string_view"
document_id => "%{ndb_no}"
hosts => "localhost:9200"
}
}
Which version of Logstash are you using? What command are you using to start Logstash? Make sure that the input and output blocks resemble the ones given below:
input {
beats {
port => "29600"
type => "weblogic-server"
}
}
filter {
}
output {
elasticsearch {
hosts => ["127.0.0.1:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}

Logstash: Error: mongodb.jdbc.MongoDriver not loaded

I am getting the error below while using the MongoDB Java driver to read data from MongoDB and push it to Elasticsearch:
Error: mongodb.jdbc.MongoDriver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Platform Info:
OS- RHEL 6.6
Logstash- 5.5.0
Elasticsearch- 5.5.0
Mongodb- 3.2.13
Jars- mongodb-driver-core-3.4.2.jar, mongo-java-driver-3.4.2.jar and bson-3.4.2.jar
Logstash config
input{
jdbc{
jdbc_driver_library => "/home/pdwiwe/logstash-5.5.0/bin/mongo-java-driver-3.4.2.jar"
jdbc_driver_class => "mongodb.jdbc.MongoDriver"
jdbc_connection_string => "jdbc:mongo://hostname:27017?authSource=admin"
jdbc_user => "user"
jdbc_password => "pwd"
statement => "select * from system.users"
}
}
output {
if "_grokparsefailure" not in [tags]{
elasticsearch {
hosts => [ "localhost:9200" ]
index => "mongodb-data"
}
}
}
Logstash Service Start:
/home/pdwiwe/logstash-5.5.0/bin$ sh logstash -f mongo.conf
mongodb.jdbc.MongoDriver is not a driver class contained in the mongo-java-driver.
AFAIK, this driver does not support JDBC.
Various JDBC drivers wrap the mongo-java-driver, such as UnityJDBC, Simba, and DbSchema.
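As an illustration only (the jar name and path below are assumptions, not taken from this thread), a jdbc input using one of those wrapper drivers, e.g. UnityJDBC, might look roughly like this:

jdbc {
  # hypothetical path to a UnityJDBC MongoDB wrapper jar
  jdbc_driver_library => "/home/pdwiwe/logstash-5.5.0/bin/mongodb_unityjdbc_full.jar"
  jdbc_driver_class => "mongodb.jdbc.MongoDriver"
  jdbc_connection_string => "jdbc:mongo://hostname:27017?authSource=admin"
  jdbc_user => "user"
  jdbc_password => "pwd"
  statement => "select * from system.users"
}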
