Kibana index pattern repeated hits - elasticsearch

I'm new to the ELK stack, and I'm not getting what I expected when creating index patterns. Here is the problem:
First, I created a logstash conf file:
input {
  file {
    path => ["/usr/share/logs_data/log_6.log"]
    start_position => "beginning"
  }
}
filter {
  grok {
    match => ["message", "(?<execution_date>\d{4}-\d{2}-\d{2}) (?<execution_time>\d{2}:\d{2}:\d{2})%{GREEDYDATA}/configs/pipelines/(?<crawler_category>([^/])+)/(?<crawler_subcategory>([^-])+)-(?<crawler_name>([^.])+).json"]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    ecs_compatibility => disabled
    index => "log_6"
  }
}
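As a side note, the file input tracks how far it has read each file in a "sincedb" file. A minimal sketch of the same input with an explicit sincedb location (the path below is an illustrative choice, not taken from the setup above):
input {
  file {
    path => ["/usr/share/logs_data/log_6.log"]
    start_position => "beginning"
    # pin the read-offset bookkeeping to a known file instead of the
    # auto-generated one, which makes restarts easier to reason about
    sincedb_path => "/usr/share/logstash/data/sincedb_log6"
  }
}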
Then, I updated this file, log_6.log, three times:
First:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
In Kibana's Discover view: one hit
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
Second:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)"
In Kibana's Discover view: three hits
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:01.510 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:ukvaoHcBZfIv8Wwq-hrX _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:01.515 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:u0vaoHcBZfIv8Wwq-hrd _type:_doc _index:log_6 _score:0
Third:
log_6.log
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)"
2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/9/subcat9-my_file_9.json)"
In Kibana's Discover view: five hits
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:02:32.968 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:uEvZoHcBZfIv8WwqqBlB _type:_doc _index:log_6 _score:0
crawler_category:7 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:01.510 crawler_subcategory:subcat7 crawler_name:my_file_7 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/7/subcat7-my_file_7.json)" _id:ukvaoHcBZfIv8Wwq-hrX _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:01.515 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:u0vaoHcBZfIv8Wwq-hrd _type:_doc _index:log_6 _score:0
crawler_category:9 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:33.595 crawler_subcategory:subcat9 crawler_name:my_file_9 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/9/subcat9-my_file_9.json)" _id:PEvboHcBZfIv8WwqeBsm _type:_doc _index:log_6 _score:0
crawler_category:8 host:fe299a799115 execution_date:Jan 12, 2021 @ 21:00:00.000 @version:1 path:/usr/share/logs_data/log_6.log @timestamp:Feb 14, 2021 @ 11:04:33.595 crawler_subcategory:subcat7 crawler_name:my_file_8 execution_time:12:41:17 message:2021-01-13 12:41:17,756 luigi-logger[11977] INFO: *****> Chamando "UpdateDBTables(/home/ubuntu/my_folder/my_software/configs/pipelines/8/subcat7-my_file_8.json)" _id:PUvboHcBZfIv8WwqeBsn _type:_doc _index:log_6 _score:0
Why does the index pattern get hits for older entries each time I update log_6.log with a new log line? For example, why is the second line of the log read again on the third update?
Update: here is the Logstash log:
Attaching to docker-elk_logstash_1
logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby2302152547405294616jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2021-02-14T14:02:20,574][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
logstash_1 | [2021-02-14T14:02:20,651][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2021-02-14T14:02:20,667][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2021-02-14T14:02:21,471][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"0c9439e7-3c67-4b5d-90f4-a99008772803", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2021-02-14T14:02:22,115][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2021-02-14T14:02:23,391][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-02-14T14:02:24,743][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:25,089][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:25,202][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:25,210][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:25,463][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2021-02-14T14:02:25,465][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2021-02-14T14:02:28,676][INFO ][org.reflections.Reflections] Reflections took 49 ms to scan 1 urls, producing 23 keys and 47 values
logstash_1 | [2021-02-14T14:02:29,562][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-02-14T14:02:29,798][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:29,800][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-02-14T14:02:29,881][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:29,892][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-02-14T14:02:29,923][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,924][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,930][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
logstash_1 | [2021-02-14T14:02:29,931][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-02-14T14:02:30,033][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2021-02-14T14:02:30,059][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2021-02-14T14:02:30,069][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2021-02-14T14:02:30,254][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2021-02-14T14:02:30,344][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2021-02-14T14:02:30,367][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x524db6c0 run>"}
logstash_1 | [2021-02-14T14:02:30,537][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7719a78e@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
logstash_1 | [2021-02-14T14:02:31,467][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.1}
logstash_1 | [2021-02-14T14:02:31,528][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.98}
logstash_1 | [2021-02-14T14:02:31,575][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2021-02-14T14:02:31,904][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_4665540243166de885448baafb9de578", :path=>["/usr/share/logs_data/log_6.log"]}
logstash_1 | [2021-02-14T14:02:31,941][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2021-02-14T14:02:32,064][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2021-02-14T14:02:32,088][INFO ][filewatch.observingtail ][main][1564967280b5861f1e98faa762e5f84d80b5a693bf98ecd54f18d1bcfc26ea2d] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2021-02-14T14:02:32,906][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Update 2: also, here is my sincedb file after appending the three log lines:
18616173 0 2050 486 1613311473.59608 /usr/share/logs_data/log_6.log
18616174 0 2050 324 1613311441.515639
If I understood correctly, this looks right: 324 is the byte offset before the last file update and 486 is the byte offset after it.
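For reference, the columns of a sincedb entry are, as I understand the file input's format: inode, major device number, minor device number, byte offset read so far, timestamp of last activity, and (once known) the watched path. Annotated against the first line above:
18616173                          inode of the watched file
0 2050                            major and minor device numbers
486                               byte offset already read
1613311473.59608                  last activity (Unix timestamp)
/usr/share/logs_data/log_6.log    resolved path
Note that the two entries carry different inodes (18616173 vs 18616174), which is what you would see if the file had been replaced (for example by an editor's save) rather than appended to in place.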
I also checked the Elastic UI on the localhost port, for the index log_6, in the Index Management section:
It shows a Docs Count of 5, which makes me wonder whether the data is reaching Kibana the right way...
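As an aside, one way to confirm that count straight from Elasticsearch, independent of Kibana (a sketch assuming the elastic/changeme credentials from the Logstash output above and port 9200 published on localhost):
curl -u elastic:changeme "http://localhost:9200/log_6/_count?pretty"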

Related

Not able to use bind-mount volumes with Elasticsearch used in a podman container

I'm new to Elasticsearch (ES), and I'm currently setting up a customized podman container ES 8.5.0 installation (rootless install) from the ES base RPM repository.
In this installation I'm using a dedicated Linux user, 'elasticadm', which owns the files inside the container and on the local Red Hat Linux 8.5 host.
Basically I use the following ownership for the installation on localhost:
/app/elasticsearch/data - /var/log/elasticsearch/elasticsearch.log - /etc/elasticsearch/elasticsearch.yml:
elasticadm:elasticsearch - then, after the error below occurred, I tried elasticadm:root (but with no more success).
Whenever I run an Elasticsearch podman container with any bind-mount volumes, the startup fails with the following error message:
"
Fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml
"
An ES podman installation without bind-mount volumes works fine, but is of course of no interest.
I'm able to deploy the container without any bind-mount volumes:
podman run --detach --name es850 --publish 9200:9200 --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
[2022-11-09T20:37:41,777][INFO ][o.e.n.Node ] [Prod] version[8.5.0], pid[72], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T20:37:41,782][INFO ][o.e.n.Node ] [Prod] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T20:37:41,783][INFO ][o.e.n.Node ] [Prod] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5358173424819503746, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T20:37:43,721][INFO ][c.a.c.i.j.JacksonVersion ] [Prod] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [aggs-matrix-stats]
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [analysis-common]
[2022-11-09T20:37:45,176][INFO ][o.e.p.PluginsService ] [Prod] loaded module [apm]
......
[2022-11-09T20:37:45,190][INFO ][o.e.p.PluginsService ] [Prod] loaded module [x-pack-watcher]
[2022-11-09T20:37:45,191][INFO ][o.e.p.PluginsService ] [Prod] no plugins loaded
[2022-11-09T20:37:48,027][WARN ][stderr ] [Prod] Nov 09, 2022 8:37:48 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T20:37:48,028][WARN ][stderr ] [Prod] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T20:37:48,048][INFO ][o.e.n.Node ] [Prod] node name [Prod], node ID [CvroQFRsTxKqyWfwcOJGag], cluster name [elasticsearch], roles [data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest]
[2022-11-09T20:37:51,831][INFO ][o.e.x.s.Security ] [Prod] Security is enabled
[2022-11-09T20:37:52,214][INFO ][o.e.x.s.a.s.FileRolesStore] [Prod] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2022-11-09T20:37:52,628][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [Prod] Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the `bin/elasticsearch-reset-password` tool to set the password for the elastic user.
[2022-11-09T20:37:52,724][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Prod] [controller/96] [Main.cc@123] controller (64 bit): Version 8.5.0 (Build 3922fab346e761) Copyright (c) 2022 Elasticsearch BV
[2022-11-09T20:37:53,354][INFO ][o.e.t.n.NettyAllocator ] [Prod] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-09T20:37:53,381][INFO ][o.e.i.r.RecoverySettings ] [Prod] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-11-09T20:37:53,425][INFO ][o.e.d.DiscoveryModule ] [Prod] using discovery type [single-node] and seed hosts providers [settings]
[2022-11-09T20:37:54,888][INFO ][o.e.n.Node ] [Prod] initialized
[2022-11-09T20:37:54,889][INFO ][o.e.n.Node ] [Prod] starting ...
[2022-11-09T20:37:54,901][INFO ][o.e.x.s.c.f.PersistentCache] [Prod] persistent cache index loaded
[2022-11-09T20:37:54,903][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [Prod] deprecation component started
[2022-11-09T20:37:55,011][INFO ][o.e.t.TransportService ] [Prod] publish_address {10.0.2.100:9300}, bound_addresses {[::]:9300}
[2022-11-09T20:37:55,122][WARN ][o.e.b.BootstrapChecks ] [Prod] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-11-09T20:37:55,124][INFO ][o.e.c.c.ClusterBootstrapService] [Prod] this node has not joined a bootstrapped cluster yet; [cluster.initial_master_nodes] is set to [Prod]
[2022-11-09T20:37:55,133][INFO ][o.e.c.c.Coordinator ] [Prod] setting initial configuration to VotingConfiguration{CvroQFRsTxKqyWfwcOJGag}
[2022-11-09T20:37:55,327][INFO ][o.e.c.s.MasterService ] [Prod] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw} completing election], term: 1, version: 1, delta: master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}
[2022-11-09T20:37:55,352][INFO ][o.e.c.c.CoordinationState] [Prod] cluster UUID set to [_wcBh4-JRtuLqIBXyNhZ5A]
[2022-11-09T20:37:55,370][INFO ][o.e.c.s.ClusterApplierService] [Prod] master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2022-11-09T20:37:55,439][INFO ][o.e.r.s.FileSettingsService] [Prod] starting file settings watcher ...
[2022-11-09T20:37:55,447][INFO ][o.e.r.s.FileSettingsService] [Prod] file settings service up and running [tid=51]
[2022-11-09T20:37:55,456][INFO ][o.e.h.AbstractHttpServerTransport] [Prod] publish_address {10.0.2.100:9200}, bound_addresses {[::]:9200}
[2022-11-09T20:37:55,457][INFO ][o.e.n.Node ] [Prod] started {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}{ml.max_jvm_size=1958739968, ml.allocated_processors_double=4.0, xpack.installed=true, ml.machine_memory=3917570048, ml.allocated_processors=4}
[2022-11-09T20:37:55,510][INFO ][o.e.g.GatewayService ] [Prod] recovered [0] indices into cluster_state
[2022-11-09T20:37:55,691][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.watch-history-16] for index patterns [.watcher-history-16*]
[2022-11-09T20:37:55,700][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2022-11-09T20:37:55,707][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2022-11-09T20:37:55,718][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [.deprecation-indexing-mappings]
[2022-11-09T20:37:55,723][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [synthetics-mappings]
...
[2022-11-09T20:37:56,392][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [Prod] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
[2022-11-09T20:37:56,510][INFO ][o.e.l.LicenseService ] [Prod] license [4b5d6876-1402-470e-96fd-f9ff8211cca7] mode [basic] - valid
[2022-11-09T20:37:56,511][INFO ][o.e.x.s.a.Realms ] [Prod] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-11-09T20:37:56,538][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [Prod] Node [{Prod}{CvroQFRsTxKqyWfwcOJGag}] is selected as the current health node.
# and connection test is fine:
curl --cacert http_ca.crt -u elastic https://127.0.0.1:9200
Enter host password for user 'elastic':
{
"name" : "Prod",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "........",
"version" : {
"number" : "8.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "c94b4700cda13820dad5aa74fae6db185ca5c304",
"build_date" : "2022-10-24T16:54:16.433628434Z",
"build_snapshot" : false,
"lucene_version" : "9.4.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
Elasticsearch podman installation with bind-mount volumes (fails):
podman run --detach --name es850 --publish 9200:9200 \
  --volume=/etc/elasticsearch/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:Z \
  --volume=/var/log/elasticsearch/elasticsearch.log:/var/log/elasticsearch/elasticsearch.log:Z \
  --volume=/app/elasticsearch/data:/app/elasticsearch/data:Z \
  --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
Aborting auto configuration because the node keystore contains password settings already
[2022-11-09T15:56:27,292][INFO ][o.e.n.Node ] [0d8414e9b51b] version[8.5.0], pid[76], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T15:56:27,299][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T15:56:27,300][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10492222574682252504, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T15:56:29,369][INFO ][c.a.c.i.j.JacksonVersion ] [0d8414e9b51b] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T15:56:30,863][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [aggs-matrix-stats]
.............
[2022-11-09T15:56:30,880][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [x-pack-watcher]
[2022-11-09T15:56:30,881][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] no plugins loaded
[2022-11-09T15:56:33,720][WARN ][stderr ] [0d8414e9b51b] Nov 09, 2022 3:56:33 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T15:56:33,721][WARN ][stderr ] [0d8414e9b51b] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T15:56:33,740][INFO ][o.e.n.Node ] [0d8414e9b51b] node name [0d8414e9b51b], node ID [rMFgxntETo63opwgU7P9sg], cluster name [elasticsearch], roles [ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen]
[2022-11-09T15:56:36,194][ERROR][o.e.b.Elasticsearch ] [0d8414e9b51b] fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:648)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.loadSslConfigurations(SSLService.java:612)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:156)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:465)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:314)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.lambda$new$15(Node.java:704)
at org.elasticsearch.server@8.5.0/org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:252)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.<init>(Node.java:719)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log
# Configuration is the following (elasticsearch.yml):
node.name: Prod # Name is 'Prod' but it's not a true production server
path.data: /app/elasticsearch/data
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
ingest.geoip.downloader.enabled: false
# Security:
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
#transport.host: 0.0.0.0
$ podman exec -it es850 bash
[elasticadm@8a9ceb50b3b4 /]$ /usr/share/elasticsearch/bin/elasticsearch-keystore list
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
autoconfiguration.password_hash
keystore.seed
xpack.security.http.ssl.keystore.secure_password
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
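For what it's worth, one sketch of a way to reconcile the keystore with the mounted configuration (an assumption on my part, not a confirmed fix): the boot error names exactly the two transport-SSL secure_password entries listed above, so if those certificates are being reconfigured anyway, the orphaned entries can be removed so they no longer have to match elasticsearch.yml:
# inside the container: remove the keystore entries named in the boot error
/usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password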
Any ideas / advice would be really appreciated, because I don't know what's suddenly wrong with the xpack.security parameters and how they relate to the podman bind-mount volumes.
The base xpack.security settings seem correctly configured (it's the initial base configuration, unmodified so far).

Error within Logstash for the ELK stack -- Unsure after days of debugging

The only non-commented-out line of my logstash.yml is:
path.config: "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/config/logstash-sample.conf"
The only non-commented-out section of my pipelines.yml is:
- pipeline.id: log_files
  #
  # # The configuration string to be used by this pipeline
  # config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
  # # The path from where to read the configuration text
  path.config: "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/config/logstash-sample.conf"
If I simply run logstash.bat from within bin, I get:
[2021-10-11T14:40:51,933][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
If I run: logstash.bat -f C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\logstash-sample.conf
C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\bin>logstash.bat -f C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\logstash-sample.conf
"Using bundled JDK: ""
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logs which is now configured via log4j2.properties
[2021-10-11T14:38:51,311][INFO ][logstash.runner ] Log4j configuration path used is: C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\log4j2.properties
[2021-10-11T14:38:51,327][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [mswin32-x86_64]"}
[2021-10-11T14:38:51,421][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-10-11T14:38:53,230][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-10-11T14:38:53,465][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", [A-Za-z0-9_-], '\"', \"'\", [A-Za-z_], \"-\", [0-9], \"[\", \"{\" at line 13, column 37 (byte 250) after filter{\ncsv{\nseparator=>\",\"\ncolumns=>[\"ip\",\"date\",\"time\",\"zone\",", :backtrace=>["C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}
[2021-10-11T14:38:53,559][INFO ][logstash.runner ] Logstash shut down.
[2021-10-11T14:38:53,575][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.19.0.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.19.0.jar:?]
at C_3a_.ELK.logstash_minus_7_dot_15_dot_0_minus_windows_minus_x86_64.logstash_minus_7_dot_15_dot_0.lib.bootstrap.environment.<main>(C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\lib\bootstrap\environment.rb:94) ~[?:?]
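For what it's worth, the parse error points into a csv filter's columns array at line 13, column 37 of logstash-sample.conf. A syntactically valid filter of that shape looks like the sketch below (the column names after "zone" are placeholders, since the original list is cut off in the error message):
filter {
  csv {
    separator => ","
    # each column name must be a quoted string, entries separated by
    # commas, and the array closed with ] before the block's closing }
    columns => ["ip", "date", "time", "zone", "bytes"]
  }
}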

Elasticsearch not creating the index for a new pipeline via Logstash

I have set up an ELK stack, but Elasticsearch is not creating the index and I am unable to upload the data. The Elasticsearch and Logstash services are both running.
Below are the details. However, I do not see anything in the logs.
Elastic config:
[root@aruba-elk2 rm_logs]# cat /etc/elasticsearch/elasticsearch.yml
# Elasticsearch config
#########################
cluster.name: log-cohort-test
node.name: aruba-elk2
node.master: true
path:
  data: /elk/lib/elasticsearch
  logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
bootstrap.system_call_filter: false
Logstash config:
[root@aruba-elk2 rm_logs]# cat /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash
[root@aruba-elk2 rm_logs]# cat /etc/logstash/conf.d/logstash-syslog.conf
input {
  file {
    path => [ "/elk/rm_logs/*.txt" ]
    type => "rmlog"
  }
}
filter {
  if [type] == "rmlog" {
    grok {
      match => { "message" => "%{HOSTNAME:hostname},%{DATE:date},%{HOUR:hour1}:%{MINUTE:minute1},%{NUMBER}-%{WORD},%{USER:user},%{USER:user2} %{NUMBER:pid} %{NUMBER:float} %{NUMBER:float} %{NUMBER:number1} %{NUMBER:number2} %{DATA} %{HOUR:hour2}:%{MINUTE:minute2} %{HOUR:hour3}:%{MINUTE:minute3} %{GREEDYDATA:command},%{PATH:path}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
  }
}
output {
  if [type] == "rmlog" {
    elasticsearch {
      hosts => ["aruba-elk2:9200"]
      manage_template => false
      index => "rmlog-%{+YYYY.MM.dd}"
      #document_type => "messages"
    }
  }
}
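One quick way to check whether events are leaving Logstash at all (a debugging sketch, not part of the original config) is to add a stdout output next to the elasticsearch one:
output {
  if [type] == "rmlog" {
    elasticsearch {
      hosts => ["aruba-elk2:9200"]
      manage_template => false
      index => "rmlog-%{+YYYY.MM.dd}"
    }
    # debugging aid: print every event so you can see whether the
    # file input and grok filter are producing anything
    stdout { codec => rubydebug }
  }
}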
Input data source:
[root@aruba-elk2 rm_logs]# cd /elk/rm_logs/
[root@aruba-elk2 rm_logs]# ls -ltrh | head
total 2.6M
-rw-r--r-- 1 root root 558 Jan 11 11:27 dbxchw092.txt
-rw-r--r-- 1 root root 405 Jan 11 11:27 dbxtx220.txt
-rw-r--r-- 1 root root 241 Jan 11 11:27 dbxcvm139.txt
-rw-r--r-- 1 root root 455 Jan 11 11:27 dbxcnl038.txt
-rw-r--r-- 1 root root 230 Jan 11 11:27 dbxchw052.txt
-rw-r--r-- 1 root root 143 Jan 11 11:27 dbxtx222.txt
-rw-r--r-- 1 root root 577 Jan 11 11:27 dbxtx224.txt
-rw-r--r-- 1 root root 274 Jan 11 11:27 dbxcvm082.txt
-rw-r--r-- 1 root root 281 Jan 11 11:27 dbxcsb003.txt
Sample of above data file:
testhost-in2,19/01/11,06:34,04-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /test/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-
testhost-in2,19/01/11,06:40,09-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /dv/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-\
testhost-in2,19/01/11,06:45,14-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:01 rm -rf /
LOGS:
Logstash logs:
[root@aruba-elk2 logstash]# cat logstash-plain.log
[2019-01-12T23:48:31,653][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-12T23:48:34,959][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-12T23:48:35,374][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://aruba-elk2:9200/]}}
[2019-01-12T23:48:35,588][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://aruba-elk2:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://aruba-elk2:9200/][Manticore::SocketException] Connection refused"}
[2019-01-12T23:48:35,608][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//aruba-elk2:9200"]}
[2019-01-12T23:48:36,063][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_076330d5fd2c2b811bc1960a3d0547be", :path=>["/elk/rm_logs/*.txt"]}
[2019-01-12T23:48:36,095][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x424bb675 run>"}
[2019-01-12T23:48:36,155][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-12T23:48:36,156][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-12T23:48:36,542][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-12T23:48:40,796][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://aruba-elk2:9200/"}
[2019-01-12T23:48:40,855][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-01-12T23:48:40,859][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Elasticsearch LOGS:
[root@aruba-elk2 elasticsearch]# cat gc.log.0.current | tail
2019-01-13T00:13:29.280+0530: 1237.781: Total time for which application threads were stopped: 0.0002681 seconds, Stopping threads took: 0.0000316 seconds
2019-01-13T00:13:31.281+0530: 1239.782: Total time for which application threads were stopped: 0.0003670 seconds, Stopping threads took: 0.0000586 seconds
2019-01-13T00:13:32.281+0530: 1240.782: Total time for which application threads were stopped: 0.0003134 seconds, Stopping threads took: 0.0000708 seconds
2019-01-13T00:13:37.282+0530: 1245.783: Total time for which application threads were stopped: 0.0004663 seconds, Stopping threads took: 0.0001315 seconds
2019-01-13T00:13:51.284+0530: 1259.785: Total time for which application threads were stopped: 0.0004230 seconds, Stopping threads took: 0.0000691 seconds
2019-01-13T00:13:57.286+0530: 1265.787: Total time for which application threads were stopped: 0.0008421 seconds, Stopping threads took: 0.0002697 seconds
2019-01-13T00:13:58.287+0530: 1266.787: Total time for which application threads were stopped: 0.0004467 seconds, Stopping threads took: 0.0000706 seconds
2019-01-13T00:14:11.288+0530: 1279.789: Total time for which application threads were stopped: 0.0004702 seconds, Stopping threads took: 0.0001105 seconds
2019-01-13T00:14:18.289+0530: 1286.790: Total time for which application threads were stopped: 0.0004123 seconds, Stopping threads took: 0.0000750 seconds
Any help will be appreciated..

Validating Logstash configuration

I am trying to validate my logstash configuration.
Using:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings -t -f /etc/logstash/conf.d
I received the following error:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /tmp/hsperfdata_logstash/-t/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-10-09 14:56:50.240 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-10-09 14:56:50.265 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ] 2018-10-09 14:56:50.378 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-10-09 14:56:50.380 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-10-09 14:56:51.099 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-10-09 14:56:51.126 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"80207611-d5b8-47dd-b229-23c2ade385ae", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-10-09 14:56:51.568 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-10-09 14:56:52.021 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[ERROR] 2018-10-09 14:56:53.586 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:
input {
  beats {
    # This setting must be a path
    # File does not exist or cannot be opened /etc/pki/tls/certs/logstash-forwarder.crt
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ...
  }
}
[ERROR] 2018-10-09 14:56:53.588 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:
input {
  beats {
    # This setting must be a path
    # File does not exist or cannot be opened /etc/pki/tls/private/logstash-forwarder.key
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    ...
  }
}
[ERROR] 2018-10-09 14:56:53.644 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:89:in `config_init'", "/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:62:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:89:in `plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:112:in `plugin'", "(eval):8:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:84:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
I would appreciate any help with this.
Please check whether a logstash.yml file is available in /etc/logstash. If it is, stop the Logstash service and kill any Logstash processes still running in the background. Save your config file as /etc/logstash/conf.d/your_file.conf. To run the config test, go to the Logstash bin directory and run:
./logstash -f /etc/logstash/conf.d/your_config_file.conf --config.test_and_exit
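Worth noting (my reading of the log above, not something stated in the original answer): in the command being validated, --path.settings was given no value, so it swallowed -t as its argument; that is why Logstash could not find logstash.yml and went looking for log4j2.properties under /tmp/hsperfdata_logstash/-t/. A corrected form of the original invocation would be:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d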

Connection refused elasticsearch

I'm trying to do a "curl http://localhost:9200" but getting "Failed: connection refused". Firewalld is off and the elasticsearch.yml settings are set to default. Below is a portion of the yml file.
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
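One detail that stands out (an observation, not a confirmed diagnosis): in the Paths section above, data is stored under /var/log/elasticsearch and logs under /var/data/elasticsearch, the reverse of the usual RPM-install convention. The "Null object returned for RollingFile" log4j errors in the service status further below would be consistent with a missing or unwritable log directory. The conventional layout would be:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch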
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.
