I am trying to read logs and load them into Elasticsearch using Logstash. I am running it on RHEL 7.9, integrated with Elasticsearch and Kibana, but when I run it, it stops at:
[INFO ] 2021-10-22 13:40:00.704 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
My config file is:
input {
  file {
    path => [
      "/home/logstash/connectors.log"
    ]
    start_position => "beginning"
  }
}
filter {
  grok {
    break_on_match => false
    match => {
      "message" => "%{TIMESTAMP_ISO8601:fecha} \[(?<threadname>[^\]]+)\] %{LOGLEVEL:loglevel}\s*\(%{JAVAFILE:file}:%{INT:line}\)\s*-\s*Dato\s*a\s*enviar:\s*\[%{GREEDYDATA:xml}\]"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:8087"]
    document_id => "%{[@metadata][fingerprint]}"
    index => "wilobank-%{+YYYY.MM.dd}"
  }
}
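(Note: nothing in this config sets [@metadata][fingerprint], so document_id would resolve to the literal placeholder text unless that field is populated elsewhere. A fingerprint filter along these lines would set it; this is a sketch, and hashing the raw message is an assumption:)
filter {
  fingerprint {
    # assumption: derive a stable document id by hashing the raw log line
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
  }
}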
And the execution log is:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-10-22 13:48:29.490 [main] runner - Starting Logstash {"logstash.version"=>"7.15.1", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [linux-x86_64]"}
[WARN ] 2021-10-22 13:48:29.818 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-10-22 13:48:30.990 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-10-22 13:48:31.589 [Converge PipelineAction::Create<main>] Reflections - Reflections took 73 ms to scan 1 urls, producing 120 keys and 417 values
[WARN ] 2021-10-22 13:48:32.589 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-22 13:48:32.648 [Converge PipelineAction::Create<main>] file - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-22 13:48:32.970 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-22 13:48:33.021 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-10-22 13:48:33.085 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:8087"]}
[INFO ] 2021-10-22 13:48:33.418 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:8087/]}}
[WARN ] 2021-10-22 13:48:33.566 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:8087/"}
[INFO ] 2021-10-22 13:48:33.616 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (7.15.1) {:es_version=>7}
[WARN ] 2021-10-22 13:48:33.618 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[WARN ] 2021-10-22 13:48:33.705 [[main]-pipeline-manager] grok - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-10-22 13:48:33.741 [Ruby-0-Thread-10: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2021-10-22 13:48:33.880 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/home/logstash/test.conf"], :thread=>"#<Thread:0x7a96f84a run>"}
[INFO ] 2021-10-22 13:48:34.785 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.9}
[INFO ] 2021-10-22 13:48:34.840 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_a171bd20c3269483fada27f50b68caf2", :path=>["/home/logstash/itecban-connectors.log"]}
[INFO ] 2021-10-22 13:48:34.860 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-10-22 13:48:34.897 [[main]<file] observingtail - START, creating Discoverer, Watch with file and sincedb collections
[INFO ] 2021-10-22 13:48:34.916 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Can someone help me?
Thanks in advance
If you want to parse a complete file again, you need to:
delete the sincedb files
OR delete only the corresponding line in the sincedb file
Then restart Logstash; Logstash will reparse the file.
For more info: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#sincedb_path
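For testing, the input can also be told to keep no state at all, so the file is reparsed on every run. A sketch based on the config above:
input {
  file {
    path => ["/home/logstash/connectors.log"]
    start_position => "beginning"
    # keep no read-position state; the whole file is read again on each restart
    sincedb_path => "/dev/null"
  }
}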
I have been trying for some time to send a simple log to Elasticsearch, and after trying a very simple example, the logs are still not being sent to Elasticsearch from Logstash.
Services: on the same server for this test
Operating system: CentOS 7
The logstash version is: 7.17.1
The Elasticsearch version is: 7.17.1
/etc/logstash/conf.d
input {
  file {
    path => "/var/log/elasticsearch/elasticsearch.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
/var/log/logstash/logstash-plain.log
[2022-03-18T11:33:30,690][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 119 keys and 417 values
[2022-03-18T11:33:32,042][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2022-03-18T11:33:32,540][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-03-18T11:33:32,889][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-03-18T11:33:32,908][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.1) {:es_version=>7}
[2022-03-18T11:33:32,913][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-03-18T11:33:33,037][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. data_stream => auto resolved to false
[2022-03-18T11:33:33,113][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. data_stream => auto resolved to false
[2022-03-18T11:33:33,311][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-03-18T11:33:33,337][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x15acb961 run>"}
[2022-03-18T11:33:34,573][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.23}
[2022-03-18T11:33:34,664][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-03-18T11:33:34,761][INFO ][filewatch.observingtail ][main][2b6c69038f817ebf29690e5d479fe4c6e56f482b9d6cc052978d217447903269] START, creating Discoverer, Watch with file and sincedb collections
[2022-03-18T11:33:34,771][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
/var/log/elasticsearch/elasticsearch.log
[2022-03-18T01:30:00,079][INFO ][o.e.x.m.MlDailyMaintenanceService] [ip-.eu-west-2.compute.internal] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2022-03-18T10:23:44,010][INFO ][o.e.c.m.MetadataIndexTemplateService] [ip-.eu-west-2.compute.internal] adding template [logstash] for index patterns [logstash-*]
[2022-03-18T10:23:44,189][INFO ][o.e.c.m.MetadataCreateIndexService] [ip-.eu-west-2.compute.internal] [logstash-2022.03.18-000001] creating index, cause [api], templates [logstash], shards [1]/[1]
[2022-03-18T10:23:44,522][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [ip-.eu-west-2.compute.internal] adding index lifecycle policy [logstash-policy]
[2022-03-18T10:23:44,603][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [logstash-policy]
[2022-03-18T10:23:44,671][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [logstash-policy]
[2022-03-18T10:23:44,726][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [logstash-policy]
[2022-03-18T10:23:55,371][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ip-*.eu-west-2.compute.internal] low disk watermark [85%] exceeded on [r51WwHrKTE-VK6UCAaR4IA][ip-*8.eu-west-2.compute.internal][/var/lib/elasticsearch/nodes/0] free: 1.1gb[14.1%], replicas will not be assigned to this node
Any help would be really appreciated : )
You need to specify the index in the output.
Also pay attention: you have a free-space problem in Elasticsearch: low disk watermark [85%] exceeded.
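For example, a minimal sketch of the output with an explicit index (the index name here is an assumption):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # hypothetical index name; without this option, events go to the default logstash-* indices
    index => "es-logs-%{+YYYY.MM.dd}"
  }
}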
I am trying to run the ELK stack using Docker, but unfortunately the Logstash container is not running, and I am unable to find the exact error for why it's failing.
Here is my docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    networks:
      - elk
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  logstash:
    image: logstash:7.9.2
    ports:
      - '5000:5000'
    networks:
      - elk
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
logstash.yml
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
## X-Pack security credentials
#
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: elastic
#xpack.monitoring.elasticsearch.password: changeme
logstash.conf
input {
  file {
    path => "C:\Users\User1\Downloads\library-mgmt-system-logs\user-service\user-service.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "library-mgmt-system-logstash-index"
    ecs_compatibility => disabled
  }
}
logstash shutdown logs:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby280139731768845147jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2021-08-01T08:42:44,135][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2021-08-01T08:42:44,172][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2021-08-01T08:42:44,184][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2021-08-01T08:42:44,578][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"b15dc5df-3deb-4698-aa37-e114a733bfa9", :path=>"/usr/share/logstash/data/uuid"}
[2021-08-01T08:42:45,186][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2021-08-01T08:42:46,007][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:46,306][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:46,394][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:46,399][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:46,642][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2021-08-01T08:42:46,644][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2021-08-01T08:42:48,382][INFO ][org.reflections.Reflections] Reflections took 32 ms to scan 1 urls, producing 22 keys and 45 values
[2021-08-01T08:42:48,706][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:48,706][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:48,725][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:48,725][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:48,736][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:48,736][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:48,736][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:48,736][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:48,785][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2021-08-01T08:42:48,788][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2021-08-01T08:42:48,793][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2021-08-01T08:42:48,833][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-08-01T08:42:48,879][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0xb20b7c7@/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:141 run>"}
[2021-08-01T08:42:48,888][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x62ff495a@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
[2021-08-01T08:42:48,901][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-08-01T08:42:48,931][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _template/logstash
[2021-08-01T08:42:49,686][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.81}
[2021-08-01T08:42:49,688][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
[2021-08-01T08:42:49,730][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-08-01T08:42:50,840][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2021-08-01T08:42:51,147][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-08-01T08:42:53,108][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2021-08-01T08:42:53,162][INFO ][logstash.runner ] Logstash shut down.
I resolved this issue: the root cause was that the file input pointed at a Windows host path, which does not exist inside the Linux container, so the main pipeline could not be created. Mounting the logs directory into the container and pointing the file input at the container path fixes it. Please refer to the updated files below.
docker-compose.yaml
logstash:
  image: logstash:7.13.4
  ports:
    - '5000:5000'
  networks:
    - elk
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
    - type: bind
      source: C:/Users/Rupesh_Patil/Desktop/logstash-data
      target: /usr/share/logs/
      read_only: true
  depends_on:
    - elasticsearch
logstash.conf
input {
  file {
    type => "user"
    path => "/usr/share/logs/user-service/user-service.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "library-mgmt-system-logstash-index"
    ecs_compatibility => disabled
  }
}
I am trying to create an index in Elasticsearch using a CSV file. Below is the configuration.
input {
  file {
    path => "C:\Users\soumdash\Desktop\Accounts.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  csv {
    separator => ","
    columns => ["Country_code","Account_number","User_ID","Date","Time"]
  }
  mutate { convert => ["Account_number","integer"] }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "accounts"
  }
  stdout {}
}
I am starting Logstash, and from the console I can see that it has been started and the pipeline has been created. But I cannot see the index in Kibana.
C:\Users\soumdash\Desktop\logstash-7.2.0\bin>logstash -f logstash-account.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to C:/Users/soumdash/Desktop/logstash-7.2.0/logs which is now configured via log4j2.properties
[2019-07-26T14:01:27,662][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-07-26T14:01:27,711][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-26T14:01:42,181][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"accounts", id=>"b54e1c07198cf188279cb051e01c9fe6118db48fe2ce76739dc2ace82e02c078", hosts=>[//localhost:9200], document_type=>"ERC_Acoounts", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_57f41853-7ddf-48e5-a5e4-316d94c83a0f", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-26T14:01:46,248][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-07-26T14:01:46,752][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-07-26T14:01:46,852][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-07-26T14:01:46,862][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-26T14:01:46,910][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-07-26T14:01:47,046][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-07-26T14:01:47,205][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-07-26T14:01:47,236][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-07-26T14:01:47,236][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x26c630b8 run>"}
[2019-07-26T14:01:52,105][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-07-26T14:01:52,232][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-07-26T14:01:52,249][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-07-26T14:01:53,290][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I have checked and tried a few other answers on the same issue, such as
Logstash creates pipeline but index is not created and
Logstash is not creating index in elastic search,
but with no success.
Can anyone please help? I am using ELK 7.2.
Can you use a rubydebug codec inside stdout, just to make sure that your file is being read?
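Something like this in the output section, keeping everything else the same:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "accounts"
  }
  # print every parsed event to the console to confirm the file is actually read
  stdout { codec => rubydebug }
}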
I have a script that logs temperature + humidity from different sensors and stores the data from each sensor in its own directory, and every day a new log is made in this format: YYYY-MM-DD.log.
${data_root}/A/0/*.log
${data_root}/A/1/*.log
etc.
the logs are in this format:
2018-03-02 03:48:14 25.00 27.10
(YYYY-MM-DD TIME Temperature Humidity)
I had trouble understanding how to correctly configure my Logstash instance. I figured that my input should look something like this:
input {
  file { path => "/var/wlogs/a1/*.log" type => "a1" }
  file { path => "/var/wlogs/a2/*.log" type => "a2" }
  # etc.
}
and the filter should look something like this:
filter{
if [type] == "a1" {
grok {
match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME}) %{NUMBER:temperature:float} %{NUMBER:humidity:float}" }
}
}
if [type] == "a2" {....}
I'm trying to export the data in the output section to Elasticsearch, with no success.
output {
  elasticsearch { hosts => ["ec2-xxxxxx.eu-west-2.compute.amazonaws.com:9200"] user => "elastic" password => "pass" index => "{type}" }
  stdout { codec => rubydebug }
}
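(Note: index => "{type}" is taken literally as the index name. If the intent is one index per sensor type, Logstash's sprintf field-reference syntax needs a percent sign; a sketch:)
output {
  elasticsearch {
    hosts => ["ec2-xxxxxx.eu-west-2.compute.amazonaws.com:9200"]
    user => "elastic"
    password => "pass"
    # %{type} expands to the event's type field (a1, a2, ...)
    index => "%{type}"
  }
  stdout { codec => rubydebug }
}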
here is the console output when I try to run it:
ubuntu#ip-xxx-xxx:/usr/share/logstash$ sudo bin/logstash -f ~/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-03-02 13:43:34.633 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-03-02 13:43:34.647 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-03-02 13:43:35.063 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-03-02 13:43:35.209 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.2"}
[INFO ] 2018-03-02 13:43:35.430 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-03-02 13:43:36.145 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-03-02 13:43:36.318 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@ec2-no.eu-west-2.compute.amazonaws.com:9200/]}}
[INFO ] 2018-03-02 13:43:36.327 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@ec2-no.eu-west-2.compute.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-03-02 13:43:36.447 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@ec2-3no3.eu-west-2.compute.amazonaws.com:9200/"}
[INFO ] 2018-03-02 13:43:36.610 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>nil}
[WARN ] 2018-03-02 13:43:36.611 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-03-02 13:43:36.616 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-03-02 13:43:36.619 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-03-02 13:43:36.626 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ec2-no.eu-west-2.compute.amazonaws.com:9200"]}
[INFO ] 2018-03-02 13:43:37.054 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x25b5f422@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[INFO ] 2018-03-02 13:43:37.081 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
Please help me figure out what I'm doing wrong and how to fix it :)
Thanks in advance.
P.S.: I'm using the latest versions of Elasticsearch, Kibana and Logstash.
I don't see any error in the logs, which makes me think that the log files might have already been read in a previous attempt. Since the file offsets are maintained in the sincedb file in the home directory, can you stop Logstash, delete the file, and try again?
For more details about the sincedb file, refer to https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
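Alternatively, pinning sincedb_path in each file input makes the state file easy to find and delete between test runs. A sketch; the state-file location is an assumption:
input {
  file {
    path => "/var/wlogs/a1/*.log"
    type => "a1"
    start_position => "beginning"
    # assumed location: delete this file to force a full reparse on the next run
    sincedb_path => "/var/wlogs/.sincedb_a1"
  }
}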
Logstash is 100% a disaster for me. I am using LS 1.4.1 and ES 1.0.2 on the same machine.
Here is how I start the Logstash indexer:
/usr/local/share/logstash-1.4.1/bin/logstash -f /usr/local/share/logstash.indexer.config
input {
  redis {
    host => "redis.queue.do.development.sf.test.com"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  stdout { }
  elasticsearch {
    bind_host => "127.0.0.1"
    port => "9300"
  }
}
In ES I set:
network.bind_host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
And wow... this is what I get:
/usr/local/share/logstash-1.4.1/bin/logstash -f /usr/local/share/logstash.indexer.config
Using milestone 2 input plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
log4j, [2014-05-29T12:02:29.545] WARN: org.elasticsearch.discovery: [logstash-do-logstash-sf-development-20140527082230-866-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
See http://logstash.net/docs/1.4.1/outputs/elasticsearch
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use any other version of Elasticsearch, you should set protocol => http in this plugin.
So your problem is that Logstash doesn't support the older ES version you are using without using an HTTP transport.
Setting `protocol => "http"` worked for me. I expected the EPEL repo to have complementary versions of Logstash and Elasticsearch, but ES is used for lots of things and thus is not tightly coupled with the Logstash RPMs.
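Applied to the output from the question, that looks roughly like this (a sketch for Logstash 1.4.x; host and port adjusted for the HTTP API on 9200):
output {
  stdout { }
  elasticsearch {
    # talk to the HTTP API on 9200 instead of joining the cluster over the transport port 9300
    protocol => "http"
    host => "127.0.0.1"
    port => "9200"
  }
}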
For me, the problem wasn't with the versions of elasticsearch or logstash. I had just installed them and I was using the latest version of each (1.5.0 & 1.4.2 respectively).
Running the following worked for me as well:
logstash -e 'input { stdin { } } output { elasticsearch { protocol => "http" } }'
But I wanted to get to the bottom of why I wasn't able to connect over the other protocols. Though the documentation doesn't say what the default protocol is, I was pretty sure I was using either transport or node on port 9300 by default, because of the following output I got when I started Elasticsearch:
[2015-04-14 22:21:56,355][INFO ][node ] [Super-Nova] version[1.5.0], pid[10796], build[5448160/2015-03-23T14:30:58Z]
[2015-04-14 22:21:56,355][INFO ][node ] [Super-Nova] initializing ...
[2015-04-14 22:21:56,358][INFO ][plugins ] [Super-Nova] loaded [], sites []
[2015-04-14 22:21:58,186][INFO ][node ] [Super-Nova] initialized
[2015-04-14 22:21:58,187][INFO ][node ] [Super-Nova] starting ...
[2015-04-14 22:21:58,257][INFO ][transport ] [Super-Nova] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
[2015-04-14 22:21:58,273][INFO ][discovery ] [Super-Nova] elasticsearch/KPaTxb9vRnaNXBncN5KN7g
[2015-04-14 22:22:02,053][INFO ][cluster.service ] [Super-Nova] new_master [Super-Nova][KPaTxb9vRnaNXBncN5KN7g][Azads-MBP-2][inet[/127.0.0.1:9300]], reason: zen-disco-join (elected_as_master)
[2015-04-14 22:22:02,069][INFO ][http ] [Super-Nova] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[2015-04-14 22:22:02,069][INFO ][node ] [Super-Nova] started
At first, I tried opening up port 9300 by following these instructions. That didn't change a thing, so most likely that port wasn't blocked.
Then I stumbled upon this GitHub issue. There wasn't really a solution there that helped, but I did double-check that my Elasticsearch cluster name was right by checking elasticsearch.yml (this file is usually stored where Elasticsearch is installed; run "which elasticsearch" to get an idea where to look). Lo and behold, my Elasticsearch cluster.name had my name appended to it. Removing it so that the cluster name was just "elasticsearch" helped Logstash discover my Elasticsearch instance.