I am stuck in this situation (Logstash will not shut down). See the details below.
Setup:
Elasticsearch and Logstash are hosted on the same server. Filebeat runs on another server and sends data to Logstash, which forwards it to Elasticsearch. All of these are installed as system services (Ubuntu).
Elasticsearch version: 7.6.2
Logstash version: 7.7.0
I have loaded two pipelines in Logstash and verified that the data is being sent to Elasticsearch. Now I need to make an update in one of the pipelines, but I am not able to stop (restart) Logstash.
As far as I can tell, one of the pipelines has an issue and its data is not being forwarded to Elasticsearch. Whenever I try to shut down/stop Logstash, it just hangs.
What I have tried in order to shut it down/restart it:
systemctl stop/restart logstash
Logstash service status:
systemctl status logstash
Output:
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
Active: deactivating (stop-sigterm) since Tue 2020-09-15 15:18:17 UTC; 19h ago
Main PID: 14298 (java)
Tasks: 40 (limit: 19141)
CGroup: /system.slice/logstash.service
└─14298 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,238][INFO ][logstash.outputs.elasticsearch][pipeline1][8820ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,239][INFO ][logstash.outputs.elasticsearch][pipeline1][88a920ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,239][INFO ][logstash.outputs.elasticsearch][pipeline1][88a920ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,240][INFO ][logstash.outputs.elasticsearch][pipeline1][88531a20ca4] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}
Sep 15 15:30:30 Server logstash[14298]: [2020-09-15T15:30:30,894][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"} ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"] id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"} {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"...
Sep 15 15:30:35 Server logstash[14298]: [2020-09-15T15:30:35,911][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"} ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"] id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"} {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"...
Other methods tried:
pkill -9/-14 PID
but no luck.
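In other words, something along these lines against the main PID reported by systemctl status (note that kill takes a PID while pkill matches a process name, and that signal 15 is SIGTERM and 9 is SIGKILL):

kill -15 14298                               # graceful stop, the same signal systemctl stop sends
kill -9 14298                                # forceful SIGKILL
systemctl kill -s SIGKILL logstash.service   # the same thing via systemd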
I know there is some in-flight data that is preventing Logstash from shutting down. I have checked this document from Elastic, but it did not help.
There is an option mentioned there to allow an unsafe shutdown (pipeline.unsafe_shutdown), which I have not used.
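For reference, this is roughly how that option would be enabled (assuming a package install with the settings file at /etc/logstash/logstash.yml); it forces Logstash to exit even with stalled pipelines, at the risk of losing in-flight events:

# /etc/logstash/logstash.yml
pipeline.unsafe_shutdown: true
# (the equivalent command-line flag is --pipeline.unsafe_shutdown)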
Logstash logs:
[2020-09-15T15:30:40,928][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:45,945][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:50,963][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:55,980][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
I want to stop this Logstash service. There are log entries about a dead URL, no live connections, and in-flight data, but right now I just want to shut it down.
Any ideas? And any ideas on how I can avoid this in the future?
Many thanks
Related
I'm a newbie to the ELK stack, and I'm not getting what I expected when creating index patterns. Here is the problem:
First, I created a Logstash conf file:
input {
  file {
    path => ["/usr/share/logs_data/log_6.log"]
    start_position => "beginning"
  }
}
filter {
  grok {
    match => ["message", "(?<execution_date>\d{4}-\d{2}-\d{2}) (?<execution_time>\d{2}:\d{2}:\d{2})%{GREEDYDATA}/configs/pipelines/(?<crawler_category>([^/])+)/(?<crawler_subcategory>([^-])+)-(?<crawler_name>([^.])+).json"]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    ecs_compatibility => disabled
    index => "log_6"
  }
}
Then, I updated this file, log_6.log, three times:
First:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
In Kibana's Discover view: one hit
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
Second:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)"
In Kibana's Discover view: three hits
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:01.510 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:ukvaoHcBZfIv8Wwq-hrX _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:01.515 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:u0vaoHcBZfIv8Wwq-hrd _type:_doc _index:log_6 _score:0
Third:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/9/subcat9-my_file_9.json)"
In Kibana's Discover view: five hits
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:01.510 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:ukvaoHcBZfIv8Wwq-hrX _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:01.515 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:u0vaoHcBZfIv8Wwq-hrd _type:_doc _index:log_6 _score:0
crawler_category:9 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:33.595 crawler_subcategory:subcat9 crawler_name:my_file_9 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/9/subcat9-my_file_9.json)" _id:PEvboHcBZfIv8WwqeBsm _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 # 21:00:00.000 #version:1 path:/usr/share/logs_data/log_6.log #timestamp:Feb 14, 2021 # 11:04:33.595 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:PUvboHcBZfIv8WwqeBsn _type:_doc _index:log_6 _score:0
Why does the index get hits for the older entries each time I update the log_6.log file with a new log line? For example, why is the second log line read again on the third update?
Update: here is the Logstash log:
Attaching to docker-elk_logstash_1
logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby2302152547405294616jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2021-02-14T14:02:20,574][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
logstash_1 | [2021-02-14T14:02:20,651][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2021-02-14T14:02:20,667][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2021-02-14T14:02:21,471][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"0c9439e7-3c67-4b5d-90f4-a99008772803", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2021-02-14T14:02:22,115][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2021-02-14T14:02:23,391][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-02-14T14:02:24,743][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:25,089][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx#elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:25,202][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:25,210][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:25,463][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2021-02-14T14:02:25,465][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2021-02-14T14:02:28,676][INFO ][org.reflections.Reflections] Reflections took 49 ms to scan 1 urls, producing 23 keys and 47 values
logstash_1 | [2021-02-14T14:02:29,562][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-02-14T14:02:29,798][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:29,800][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:29,881][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx#elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:29,892][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx#elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:29,923][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,924][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,930][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,931][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:30,033][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2021-02-14T14:02:30,059][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2021-02-14T14:02:30,069][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2021-02-14T14:02:30,254][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2021-02-14T14:02:30,344][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2021-02-14T14:02:30,367][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x524db6c0 run>"}
logstash_1 | [2021-02-14T14:02:30,537][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7719a78e#/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
logstash_1 | [2021-02-14T14:02:31,467][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.1}
logstash_1 | [2021-02-14T14:02:31,528][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.98}
logstash_1 | [2021-02-14T14:02:31,575][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2021-02-14T14:02:31,904][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_4665540243166de885448baafb9de578", :path=>["/usr/share/logs_data/log_6.log"]}
logstash_1 | [2021-02-14T14:02:31,941][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2021-02-14T14:02:32,064][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2021-02-14T14:02:32,088][INFO ][filewatch.observingtail ][main][1564967280b5861f1e98faa762e5f84d80b5a693bf98ecd54f18d1bcfc26ea2d] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2021-02-14T14:02:32,906][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Update 2: Also, here is my sincedb file, after appending the three log lines:
18616173 0 2050 486 1613311473.59608 /usr/share/logs_data/log_6.log
18616174 0 2050 324 1613311441.515639
If I understood correctly, this looks right: 324 is the number of bytes in the file before the last update and 486 is the number of bytes after the update.
I also checked the Elasticsearch UI on the localhost port for the log_6 index, in the Index Management section:
It shows 5 as the Docs Count value, which makes me wonder whether the data is reaching Kibana the right way...
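For what it's worth, the same count can be cross-checked directly against Elasticsearch (assuming it is reachable on localhost:9200 with the same elastic/changeme credentials used in the pipeline):

curl -s -u elastic:changeme 'http://localhost:9200/log_6/_count?pretty'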
I have set up an ELK stack, but Elasticsearch is not creating the index and I am unable to upload the data. The Elasticsearch and Logstash services are both running.
Below are the details. However, I do not see anything in the logs.
Elasticsearch config:
[root@aruba-elk2 rm_logs]# cat /etc/elasticsearch/elasticsearch.yml
# Elasticserach config
#########################
cluster.name: log-cohort-test
node.name: aruba-elk2
node.master: true
path:
  data: /elk/lib/elasticsearch
  logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
bootstrap.system_call_filter: False
[root@aruba-elk2 rm_logs]#
[root@aruba-elk2 rm_logs]#
Logstash config:
[root@aruba-elk2 rm_logs]# cat /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash
[root@aruba-elk2 rm_logs]# cat /etc/logstash/conf.d/logstash-syslog.conf
input {
  file {
    path => [ "/elk/rm_logs/*.txt" ]
    type => "rmlog"
  }
}
filter {
  if [type] == "rmlog" {
    grok {
      match => { "message" => "%{HOSTNAME:hostname},%{DATE:date},%{HOUR:hour1}:%{MINUTE:minute1},%{NUMBER}-%{WORD},%{USER:user},%{USER:user2} %{NUMBER:pid} %{NUMBER:float} %{NUMBER:float} %{NUMBER:number1} %{NUMBER:number2} %{DATA} %{HOUR:hour2}:%{MINUTE:minute2} %{HOUR:hour3}:%{MINUTE:minute3} %{GREEDYDATA:command},%{PATH:path}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
  }
}
output {
  if [type] == "rmlog" {
    elasticsearch {
      hosts => ["aruba-elk2:9200"]
      manage_template => false
      index => "rmlog-%{+YYYY.MM.dd}"
      #document_type => "messages"
    }
  }
}
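As a sanity check, the pipeline syntax can be validated before starting the service (assuming the default package install path for Logstash 6.x):

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-syslog.conf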
Input data source:
[root@aruba-elk2 rm_logs]# cd /elk/rm_logs/
[root@aruba-elk2 rm_logs]# ls -ltrh | head
total 2.6M
-rw-r--r-- 1 root root 558 Jan 11 11:27 dbxchw092.txt
-rw-r--r-- 1 root root 405 Jan 11 11:27 dbxtx220.txt
-rw-r--r-- 1 root root 241 Jan 11 11:27 dbxcvm139.txt
-rw-r--r-- 1 root root 455 Jan 11 11:27 dbxcnl038.txt
-rw-r--r-- 1 root root 230 Jan 11 11:27 dbxchw052.txt
-rw-r--r-- 1 root root 143 Jan 11 11:27 dbxtx222.txt
-rw-r--r-- 1 root root 577 Jan 11 11:27 dbxtx224.txt
-rw-r--r-- 1 root root 274 Jan 11 11:27 dbxcvm082.txt
-rw-r--r-- 1 root root 281 Jan 11 11:27 dbxcsb003.txt
A sample of the above data files:
testhost-in2,19/01/11,06:34,04-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /test/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-
testhost-in2,19/01/11,06:40,09-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /dv/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-\
testhost-in2,19/01/11,06:45,14-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:01 rm -rf /
LOGS:
Logstash logs:
[root@aruba-elk2 logstash]# cat logstash-plain.log
[2019-01-12T23:48:31,653][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-12T23:48:34,959][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-12T23:48:35,374][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://aruba-elk2:9200/]}}
[2019-01-12T23:48:35,588][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://aruba-elk2:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://aruba-elk2:9200/][Manticore::SocketException] Connection refused"}
[2019-01-12T23:48:35,608][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//aruba-elk2:9200"]}
[2019-01-12T23:48:36,063][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_076330d5fd2c2b811bc1960a3d0547be", :path=>["/elk/rm_logs/*.txt"]}
[2019-01-12T23:48:36,095][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x424bb675 run>"}
[2019-01-12T23:48:36,155][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-12T23:48:36,156][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-12T23:48:36,542][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-12T23:48:40,796][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://aruba-elk2:9200/"}
[2019-01-12T23:48:40,855][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-01-12T23:48:40,859][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Elasticsearch logs:
[root@aruba-elk2 elasticsearch]# cat gc.log.0.current | tail
2019-01-13T00:13:29.280+0530: 1237.781: Total time for which application threads were stopped: 0.0002681 seconds, Stopping threads took: 0.0000316 seconds
2019-01-13T00:13:31.281+0530: 1239.782: Total time for which application threads were stopped: 0.0003670 seconds, Stopping threads took: 0.0000586 seconds
2019-01-13T00:13:32.281+0530: 1240.782: Total time for which application threads were stopped: 0.0003134 seconds, Stopping threads took: 0.0000708 seconds
2019-01-13T00:13:37.282+0530: 1245.783: Total time for which application threads were stopped: 0.0004663 seconds, Stopping threads took: 0.0001315 seconds
2019-01-13T00:13:51.284+0530: 1259.785: Total time for which application threads were stopped: 0.0004230 seconds, Stopping threads took: 0.0000691 seconds
2019-01-13T00:13:57.286+0530: 1265.787: Total time for which application threads were stopped: 0.0008421 seconds, Stopping threads took: 0.0002697 seconds
2019-01-13T00:13:58.287+0530: 1266.787: Total time for which application threads were stopped: 0.0004467 seconds, Stopping threads took: 0.0000706 seconds
2019-01-13T00:14:11.288+0530: 1279.789: Total time for which application threads were stopped: 0.0004702 seconds, Stopping threads took: 0.0001105 seconds
2019-01-13T00:14:18.289+0530: 1286.790: Total time for which application threads were stopped: 0.0004123 seconds, Stopping threads took: 0.0000750 seconds
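One more check I can run from this host (assuming Elasticsearch answers on port 9200 as configured) is to list the indices directly, to see whether the rmlog-* index was created at all:

curl -s 'http://aruba-elk2:9200/_cat/indices?v'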
Any help will be appreciated..
I am trying to do a "curl http://localhost:9200" but I am getting "Failed connection refused". Firewalld is off and the elasticsearch.yml settings are at their defaults. Below is a portion of the yml file:
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.
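For completeness, the only other places I know to look are the systemd journal and the directory configured as path.logs above:

sudo journalctl -u elasticsearch.service -n 100 --no-pager
sudo ls -l /var/data/elasticsearch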
I'm a newbie to Apache Camel.
Lately I've been trying to make a POST request to an HTTPS REST API.
I have gone through many posts and documentation, but I still couldn't get the gist of it.
Please find my code below:
from("timer:aTimer?period=20s")
    .process(ex -> ex.getIn().setBody(
        "{\n" +
        " \"userId\": 777,\n" +
        " \"title\": \"sample\",\n" +
        " \"body\": \"my body\"\n" +
        " }"
    ))
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("restlet:https://jsonplaceholder.typicode.com/posts")
    .log("${body}");
Whenever I run my application I'm getting the error below.
Started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is starting
INFO ManagedManagementStrategy - JMX is enabled
INFO DefaultTypeConverter - Type converters loaded (core: 192, classpath: 14)
INFO DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
Mar 05, 2018 3:20:45 PM org.restlet.ext.httpclient.HttpClientHelper start
INFO: Starting the Apache HTTP client
INFO DefaultCamelContext - Route: route1 started and consuming from: timer://aTimer?period=20s
INFO DefaultCamelContext - Total 1 routes, of which 1 are started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) started in 0.879 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutting down
INFO DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 300 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=3018]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 299 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=4020]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 298 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=5023]
Mar 05, 2018 3:20:51 PM org.restlet.ext.httpclient.internal.HttpMethodCall sendRequest
WARNING: An error occurred during the communication with the remote HTTP server.
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
at sun.security.ssl.InputRecord.read(InputRecord.java:527)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:573)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:557)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:414)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:339)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:363)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
WARN TimerConsumer - Error processing exchange. Exchange[ID-ubuntu-Latitude-6430U-1520243444162-0-1]. Caused by: [org.apache.camel.component.restlet.RestletOperationException - Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server]
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
ERROR DefaultErrorHandler - Failed delivery for (MessageId: ID-ubuntu-Latitude-6430U-1520243444162-0-2 on ExchangeId: ID-ubuntu-Latitude-6430U-1520243444162-0-1). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [timer://aTimer?period=20s ] [ 5321]
[route1 ] [process1 ] [Processor#0x33ae3bf8 ] [ 4]
[route1 ] [setHeader1 ] [setHeader[CamelHttpMethod] ] [ 0]
[route1 ] [setHeader2 ] [setHeader[Content-Type] ] [ 0]
[route1 ] [to1 ] [restlet:https://jsonplaceholder.typicode.com/443:posts ] [ 5308]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Mar 05, 2018 3:20:52 PM org.restlet.ext.httpclient.HttpClientHelper stop
INFO: Stopping the HTTP client
INFO DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: timer://aTimer?period=20s
INFO DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 3 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) uptime 7.927 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutdown in 3.048 seconds
Please help me. I've also tried to use the Apache HTTP4 component, but still no luck.
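The only other variant I could think of was spelling the port out explicitly in the restlet URI (the endpoint format is protocol://hostname[:port]/resourcePattern); this is just a sketch of that attempt, not a confirmed fix:

from("timer:aTimer?period=20s")
    .process(ex -> ex.getIn().setBody("{ \"userId\": 777, \"title\": \"sample\", \"body\": \"my body\" }"))
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("restlet:https://jsonplaceholder.typicode.com:443/posts")
    .log("${body}");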
I have deployed Elasticsearch 2.2.0. Now I'm sending logs to it using td-agent 2.3.0-0.
The final <match> in the tag chain was:
<match extra.geoip.processed5.**>
  type copy
  <store>
    type file
    path /var/log/td-agent/sp_l5
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>
  <store>
    type elasticsearch
    host 11.0.0.174
    port 9200
    logstash_format true
    logstash_prefix logstash_business
    logstash_dateformat %Y.%m
    flush_interval 5s
  </store>
</match>
Now, I added a timeout within the elasticsearch type block:
request_timeout 45s
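So the elasticsearch <store> block now looks roughly like this (only request_timeout added, everything else unchanged):

<store>
  type elasticsearch
  host 11.0.0.174
  port 9200
  logstash_format true
  logstash_prefix logstash_business
  logstash_dateformat %Y.%m
  request_timeout 45s
  flush_interval 5s
</store>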
This is the td-agent.log with debug enabled.
2016-02-08 15:58:07 +0100 [info]: plugin/in_syslog.rb:176:listen: listening syslog socket on 0.0.0.0:5514 with udp
2016-02-08 15:59:03 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 15:59:03 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:03:33 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:03:33 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:03:35 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:03:35 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:08:05 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:08:05 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:08:09 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:08:09 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:12:40 +0100 [warn]: fluent/output.rb:354:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2016-02-08 15:59:04 +0100 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Could not push logs to Elasticsearch after 2 retries. read timeout reached" plugin_id="object:fd9738"
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:204:in `rescue in send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:194:in `send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:188:in `write'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:345:in `write_chunk'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:324:in `pop'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:321:in `try_flush'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:140:in `run'
2016-02-08 16:12:40 +0100 [warn]: fluent/output.rb:354:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2016-02-08 15:59:04 +0100 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Could not push logs to Elasticsearch after 2 retries. read timeout reached" plugin_id="object:1034980"
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:204:in `rescue in send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:194:in `send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:188:in `write'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:345:in `write_chunk'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:324:in `pop'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:321:in `try_flush'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:140:in `run'
I'm running this on AWS, on Ubuntu 14.04 (c3.large). I have tested access from the td-agent machine by creating an index, adding documents, and deleting the index using curl, without any problem. (To be sure, I opened all communication in my Security Groups.)
Another test from the td-agent machine:
root@bilbo:~# telnet 11.0.0.174 9200
Trying 11.0.0.174...
Connected to 11.0.0.174.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.0 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 320
{
"name" : "gandalf-gandalf",
"cluster_name" : "aaaa_dev",
"version" : {
"number" : "2.2.0",
"build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
"build_timestamp" : "2016-01-27T13:32:39Z",
"build_snapshot" : false,
"lucene_version" : "5.4.1"
},
"tagline" : "You Know, for Search"
}
Connection closed by foreign host.
root@bilbo:~#
With strace I can see this:
[pid 10774] connect(23, {sa_family=AF_INET, sin_port=htons(9200), sin_addr=inet_addr("11.0.0.174")}, 16) = -1 EINPROGRESS (Operation now in progress)
[pid 10774] clock_gettime(CLOCK_MONOTONIC, {4876, 531526283}) = 0
[pid 10774] select(24, NULL, [23], NULL, {45, 0}) = 1 (out [23], left {44, 999925})
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] connect(23, {sa_family=AF_INET, sin_port=htons(9200), sin_addr=inet_addr("11.0.0.174")}, 16) = 0
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] write(23, "POST /_bulk HTTP/1.1\r\nUser-Agent: Faraday v0.9.2\r\nHost: 11.0.0.174:9200\r\nContent-Length: 4798\r\n\r\n", 97) = 97
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] write(23, "{\"index\":{\"_index\":\"logstash_apache-2016.02.08\",\"_type\":\"fluentd\"}}\n{\"message\":\"Feb 8 16:18:47 bilbo ::apache::PRE::access: - 10.0.0.15 - control [08/Feb/2016:16:18:47 +0100] \\\"GET /status/memcached.php HTTP/1.1\\\" 200 1785 \\\"-\\\" \\\"check_http/v2.0 (monitoring-plugins 2.0)\\\" control-pre.fluzo.com:443 0\",\"n\":\"bilbo\",\"s\":\"info\",\"f\":\"local3\",\"t\":\"Feb 8 16:18:47\",\"h\":\"bilbo\",\"a\":\"apache\",\"e\":\"PRE\",\"o\":\"access\",\"ip\":\"-\",\"ip2\":\"10.0.0.15\",\"rl\":\"-\",\"ru\":\"control\",\"rt\":\"[08/Feb/2016:16:18:47 +0100]\",\"met\":\"GET\",\"pqf\":\"status/memcached.php\",\"hv\":\"HTTP/1.1\",\"st\":\"200\",\"bs\":\"1785\",\"ref\":\"-\",\"ua\":\"check_http/v2.0 (monitoring-plugins 2.0)\",\"vh\":\"control-pre.aaaa.com\",\"p\":\"443\",\"rpt\":\"0\",\"co\":null,\"ci\":null,\"la\":null,\"lo\":null,\"ar\":null,\"dm\":null,\"re\":null,\"#timestamp\":\"2016-02-08T16:19:47+01:00\"}\n{\"index\":{\"_index\":\"logstash_apache-2016.02.08\",\"_type\":\"fluentd\"}}\n{\"message\":\"Feb 8 16:18:47 bilbo ::apache::PRE::access: - 10.0.0.15 - control [08/Feb/2016:16:18:47 +0100] \\\"GET /status/core.php HTTP/1.1\\\" 200 1898 \\\"-\\\" \\\"c"..., 4798) = 4798
It seems that the connection is open.
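Another test from the td-agent machine is hitting the _bulk endpoint by hand, since that is the API the plugin uses (the index name here is just an example):

curl -s -XPOST 'http://11.0.0.174:9200/_bulk' --data-binary $'{"index":{"_index":"bulk_test","_type":"fluentd"}}\n{"message":"hello"}\n'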
To avoid confusion, I have also installed td-agent on the Elasticsearch machine itself, with the same result.
This is my Elasticsearch configuration:
### MANAGED BY PUPPET ###
---
bootstrap:
  mlockall: true
cluster:
  name: aaaa0_dev
discovery:
  zen:
    minimum_master_nodes: 1
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - 11.0.0.174
gateway:
  expected_nodes: 1
  recover_after_nodes: 1
  recover_after_time: 5m
hostname: gandalf
http:
  compression: true
index:
  store:
    compress:
      stored: true
    type: niofs
network:
  bind_host: 11.0.0.174
  publish_host: 11.0.0.174
node:
  name: gandalf-gandalf
path:
  data: /var/lib/elasticsearch-gandalf
  logs: /var/log/elasticsearch/gandalf
transport:
  tcp:
    compress: true
Any idea?
Thanks.
UPDATE
Against a real Elasticsearch cluster (3 nodes) it works.
This is the configuration:
### MANAGED BY PUPPET ###
---
bootstrap:
  mlockall: true
cluster:
  name: aaaa
discovery:
  zen:
    minimum_master_nodes: 2
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - el0
          - el1
          - el2
gateway:
  expected_nodes: 3
  recover_after_nodes: 2
  recover_after_time: 5m
hostname: kili
http:
  compression: true
index:
  store:
    compress:
      stored: true
    type: niofs
network:
  bind_host: 11.0.0.253
  publish_host: 11.0.0.253
node:
  name: kili-kili
path:
  data:
    - /var/lib/elasticsearch0
    - /var/lib/elasticsearch1
  logs: /var/log/elasticsearch/kili
transport:
  tcp:
    compress: true
However, Kibana did create the .kibana index in the single-node Elasticsearch. Also, I tested this node with curl.