Validating Logstash configuration - elasticsearch

I am trying to validate my Logstash configuration.
Using:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings -t -f /etc/logstash/conf.d
I received the following error:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /tmp/hsperfdata_logstash/-t/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-10-09 14:56:50.240 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-10-09 14:56:50.265 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ] 2018-10-09 14:56:50.378 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-10-09 14:56:50.380 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-10-09 14:56:51.099 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-10-09 14:56:51.126 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"80207611-d5b8-47dd-b229-23c2ade385ae", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-10-09 14:56:51.568 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-10-09 14:56:52.021 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[ERROR] 2018-10-09 14:56:53.586 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:

  input {
    beats {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/certs/logstash-forwarder.crt
      ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
      ...
    }
  }

[ERROR] 2018-10-09 14:56:53.588 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:

  input {
    beats {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/private/logstash-forwarder.key
      ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      ...
    }
  }

[ERROR] 2018-10-09 14:56:53.644 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:89:in `config_init'", "/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:62:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:89:in `plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:112:in `plugin'", "(eval):8:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:84:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
I would appreciate any help with this.

Please check whether the logstash.yml file is available in /etc/logstash. If it is, stop the Logstash service and kill any Logstash processes still running in the background. Save your config file as /etc/logstash/conf.d/your_file.conf. To run the config test, go to the Logstash bin directory and run:
./logstash -f /etc/logstash/conf.d/your_config_file.conf --config.test_and_exit
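Also worth noting from the output posted above: --path.settings expects a directory argument. In the original command it was given none, so Logstash appears to have consumed -t as the settings path (which is why it looked for log4j2.properties under /tmp/hsperfdata_logstash/-t and fell back to defaults). To keep that flag, pass the directory explicitly:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d
The config test itself is then failing because the beats input points at /etc/pki/tls/certs/logstash-forwarder.crt and /etc/pki/tls/private/logstash-forwarder.key, which do not exist or cannot be opened. Either fix those paths to point at your real certificate and key, or generate a pair for testing, for example with openssl (a self-signed sketch, not production advice; adjust the subject to your host):
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=logstash" \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt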

Related

Not able to use bind-mount volumes with Elasticsearch running in a podman container

I'm new to Elasticsearch (ES) and I'm currently setting up a customized podman container ES 8.5.0 installation (rootless install) from the ES base RPM repository.
In this installation I'm using a dedicated Linux user 'elasticadm' which owns the files inside the container and on the local Red Hat Linux 8.5 host.
Basically I use the following ownership for the installation on localhost:
/app/elasticsearch/data - /var/log/elasticsearch/elasticsearch.log - /etc/elasticsearch/elasticsearch.yml:
elasticadm:elasticsearch - then, after the error below occurred, I tried elasticadm:root (but with no more success).
Whenever I run an Elasticsearch podman container with any bind-mount volumes, the startup fails with the following error message:
"Fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml"
An ES podman installation without bind-mount volumes is fine, but is of course of no practical interest.
I'm able to deploy the container without any bind-mount volumes:
podman run --detach --name es850 --publish 9200:9200 --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
[2022-11-09T20:37:41,777][INFO ][o.e.n.Node ] [Prod] version[8.5.0], pid[72], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T20:37:41,782][INFO ][o.e.n.Node ] [Prod] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T20:37:41,783][INFO ][o.e.n.Node ] [Prod] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5358173424819503746, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T20:37:43,721][INFO ][c.a.c.i.j.JacksonVersion ] [Prod] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [aggs-matrix-stats]
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [analysis-common]
[2022-11-09T20:37:45,176][INFO ][o.e.p.PluginsService ] [Prod] loaded module [apm]
......
[2022-11-09T20:37:45,190][INFO ][o.e.p.PluginsService ] [Prod] loaded module [x-pack-watcher]
[2022-11-09T20:37:45,191][INFO ][o.e.p.PluginsService ] [Prod] no plugins loaded
[2022-11-09T20:37:48,027][WARN ][stderr ] [Prod] Nov 09, 2022 8:37:48 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T20:37:48,028][WARN ][stderr ] [Prod] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T20:37:48,048][INFO ][o.e.n.Node ] [Prod] node name [Prod], node ID [CvroQFRsTxKqyWfwcOJGag], cluster name [elasticsearch], roles [data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest]
[2022-11-09T20:37:51,831][INFO ][o.e.x.s.Security ] [Prod] Security is enabled
[2022-11-09T20:37:52,214][INFO ][o.e.x.s.a.s.FileRolesStore] [Prod] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2022-11-09T20:37:52,628][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [Prod] Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the `bin/elasticsearch-reset-password` tool to set the password for the elastic user.
[2022-11-09T20:37:52,724][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Prod] [controller/96] [Main.cc#123] controller (64 bit): Version 8.5.0 (Build 3922fab346e761) Copyright (c) 2022 Elasticsearch BV
[2022-11-09T20:37:53,354][INFO ][o.e.t.n.NettyAllocator ] [Prod] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-09T20:37:53,381][INFO ][o.e.i.r.RecoverySettings ] [Prod] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-11-09T20:37:53,425][INFO ][o.e.d.DiscoveryModule ] [Prod] using discovery type [single-node] and seed hosts providers [settings]
[2022-11-09T20:37:54,888][INFO ][o.e.n.Node ] [Prod] initialized
[2022-11-09T20:37:54,889][INFO ][o.e.n.Node ] [Prod] starting ...
[2022-11-09T20:37:54,901][INFO ][o.e.x.s.c.f.PersistentCache] [Prod] persistent cache index loaded
[2022-11-09T20:37:54,903][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [Prod] deprecation component started
[2022-11-09T20:37:55,011][INFO ][o.e.t.TransportService ] [Prod] publish_address {10.0.2.100:9300}, bound_addresses {[::]:9300}
[2022-11-09T20:37:55,122][WARN ][o.e.b.BootstrapChecks ] [Prod] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-11-09T20:37:55,124][INFO ][o.e.c.c.ClusterBootstrapService] [Prod] this node has not joined a bootstrapped cluster yet; [cluster.initial_master_nodes] is set to [Prod]
[2022-11-09T20:37:55,133][INFO ][o.e.c.c.Coordinator ] [Prod] setting initial configuration to VotingConfiguration{CvroQFRsTxKqyWfwcOJGag}
[2022-11-09T20:37:55,327][INFO ][o.e.c.s.MasterService ] [Prod] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw} completing election], term: 1, version: 1, delta: master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}
[2022-11-09T20:37:55,352][INFO ][o.e.c.c.CoordinationState] [Prod] cluster UUID set to [_wcBh4-JRtuLqIBXyNhZ5A]
[2022-11-09T20:37:55,370][INFO ][o.e.c.s.ClusterApplierService] [Prod] master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2022-11-09T20:37:55,439][INFO ][o.e.r.s.FileSettingsService] [Prod] starting file settings watcher ...
[2022-11-09T20:37:55,447][INFO ][o.e.r.s.FileSettingsService] [Prod] file settings service up and running [tid=51]
[2022-11-09T20:37:55,456][INFO ][o.e.h.AbstractHttpServerTransport] [Prod] publish_address {10.0.2.100:9200}, bound_addresses {[::]:9200}
[2022-11-09T20:37:55,457][INFO ][o.e.n.Node ] [Prod] started {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}{ml.max_jvm_size=1958739968, ml.allocated_processors_double=4.0, xpack.installed=true, ml.machine_memory=3917570048, ml.allocated_processors=4}
[2022-11-09T20:37:55,510][INFO ][o.e.g.GatewayService ] [Prod] recovered [0] indices into cluster_state
[2022-11-09T20:37:55,691][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.watch-history-16] for index patterns [.watcher-history-16*]
[2022-11-09T20:37:55,700][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2022-11-09T20:37:55,707][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2022-11-09T20:37:55,718][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [.deprecation-indexing-mappings]
[2022-11-09T20:37:55,723][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [synthetics-mappings]
...
[2022-11-09T20:37:56,392][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [Prod] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
[2022-11-09T20:37:56,510][INFO ][o.e.l.LicenseService ] [Prod] license [4b5d6876-1402-470e-96fd-f9ff8211cca7] mode [basic] - valid
[2022-11-09T20:37:56,511][INFO ][o.e.x.s.a.Realms ] [Prod] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-11-09T20:37:56,538][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [Prod] Node [{Prod}{CvroQFRsTxKqyWfwcOJGag}] is selected as the current health node.
# and connection test is fine:
curl --cacert http_ca.crt -u elastic https://127.0.0.1:9200
Enter host password for user 'elastic':
{
"name" : "Prod",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "........",
"version" : {
"number" : "8.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "c94b4700cda13820dad5aa74fae6db185ca5c304",
"build_date" : "2022-10-24T16:54:16.433628434Z",
"build_snapshot" : false,
"lucene_version" : "9.4.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
Elasticsearch podman installation with bind-mount volumes (fails):
podman run --detach --name es850 --publish 9200:9200 \
  --volume=/etc/elasticsearch/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:Z \
  --volume=/var/log/elasticsearch/elasticsearch.log:/var/log/elasticsearch/elasticsearch.log:Z \
  --volume=/app/elasticsearch/data:/app/elasticsearch/data:Z \
  --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
Aborting auto configuration because the node keystore contains password settings already
[2022-11-09T15:56:27,292][INFO ][o.e.n.Node ] [0d8414e9b51b] version[8.5.0], pid[76], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T15:56:27,299][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T15:56:27,300][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10492222574682252504, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T15:56:29,369][INFO ][c.a.c.i.j.JacksonVersion ] [0d8414e9b51b] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T15:56:30,863][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [aggs-matrix-stats]
.............
[2022-11-09T15:56:30,880][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [x-pack-watcher]
[2022-11-09T15:56:30,881][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] no plugins loaded
[2022-11-09T15:56:33,720][WARN ][stderr ] [0d8414e9b51b] Nov 09, 2022 3:56:33 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T15:56:33,721][WARN ][stderr ] [0d8414e9b51b] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T15:56:33,740][INFO ][o.e.n.Node ] [0d8414e9b51b] node name [0d8414e9b51b], node ID [rMFgxntETo63opwgU7P9sg], cluster name [elasticsearch], roles [ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen]
[2022-11-09T15:56:36,194][ERROR][o.e.b.Elasticsearch ] [0d8414e9b51b] fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:648)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.loadSslConfigurations(SSLService.java:612)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:156)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:465)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:314)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.lambda$new$15(Node.java:704)
at org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:252)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:719)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log
# Configuration is the following (elasticsearch.yml):
node.name: Prod  # Name is 'Prod' but it's not a true production server
path.data: /app/elasticsearch/data
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
ingest.geoip.downloader.enabled: false
# Security:
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
#transport.host: 0.0.0.0
$ podman exec -it es850 bash
[elasticadm@8a9ceb50b3b4 /]$ /usr/share/elasticsearch/bin/elasticsearch-keystore list
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
autoconfiguration.password_hash
keystore.seed
xpack.security.http.ssl.keystore.secure_password
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
Any ideas / advice would be really appreciated, because I don't know what's suddenly wrong with the xpack.security parameters, or what their relationship with the podman bind-mount volumes is.
These base xpack.security settings seem correctly configured (it's the initial base configuration, unmodified so far).

Error within Logstash for the ELK stack -- Unsure after days of debugging

The only non-commented-out line of my logstash.yml is:
path.config: "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/config/logstash-sample.conf"
The only non-commented-out section of my pipelines.yml is:
- pipeline.id: log_files
  #
  # # The configuration string to be used by this pipeline
  # config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
  # # The path from where to read the configuration text
  path.config: "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/config/logstash-sample.conf"
If I simply run logstash.bat from within the bin directory:
[2021-10-11T14:40:51,933][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
If I run: logstash.bat -f C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\logstash-sample.conf
C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\bin>logstash.bat -f C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\logstash-sample.conf
"Using bundled JDK: ""
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logs which is now configured via log4j2.properties
[2021-10-11T14:38:51,311][INFO ][logstash.runner ] Log4j configuration path used is: C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\config\log4j2.properties
[2021-10-11T14:38:51,327][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [mswin32-x86_64]"}
[2021-10-11T14:38:51,421][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-10-11T14:38:53,230][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-10-11T14:38:53,465][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", [A-Za-z0-9_-], '\"', \"'\", [A-Za-z_], \"-\", [0-9], \"[\", \"{\" at line 13, column 37 (byte 250) after filter{\ncsv{\nseparator=>\",\"\ncolumns=>[\"ip\",\"date\",\"time\",\"zone\",", :backtrace=>["C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "C:/ELK/logstash-7.15.0-windows-x86_64/logstash-7.15.0/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}
[2021-10-11T14:38:53,559][INFO ][logstash.runner ] Logstash shut down.
[2021-10-11T14:38:53,575][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.19.0.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.19.0.jar:?]
at C_3a_.ELK.logstash_minus_7_dot_15_dot_0_minus_windows_minus_x86_64.logstash_minus_7_dot_15_dot_0.lib.bootstrap.environment.<main>(C:\ELK\logstash-7.15.0-windows-x86_64\logstash-7.15.0\lib\bootstrap\environment.rb:94) ~[?:?]

LogStash Configuration issue

I am a novice in the world of LogStash; I just started to learn it. I tried to create a config file called Unhealthy_data.config using data from a similarly named csv file.
The contents of my config file are as below:
input{
    file{
        path => "D:/01_Users/LogStash/Unhealthy.csv"
        start_position => "beginning"
    }
    filter{
        csv{
            separator => ","
            columns => ["cluster_name","unhealthy_nodes","userid","applicationid","queue","application_type","impact_host","cluster_utilization","queue_utilization","running_containers","running_memory","elapsed_time","tech_datestamp"]
        }
    }
    output{
        elasticsearch{
            hosts =>"http://localhost:9200"
            index => "unhealthy"
            document_type => "unhealthy_data"
        }
        stdout{}
    }
}
The last column "tech_datestamp" is a date column.
I am unable to load the data and get the error below:
C:\ELK\logstash-7.9.1\bin>logstash -f C:\ELK\LogStash\UnhealthyData.config
Sending Logstash logs to C:/ELK/logstash-7.9.1/logs which is now configured via log4j2.properties
[2020-11-28T07:33:35,924][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc Java HotSpot(TM) 64-Bit Server VM 25.271-b09 on 1.8.0_271-b09 +indy +jit [mswin32-x86_64]"}
[2020-11-28T07:33:36,158][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-11-28T07:33:37,058][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"=>\" at line 7, column 6 (byte 131) after input{\r\n\tfile{\r\n\t\tpath => \"D:/01_Users/LogStash/Unhealthy.csv\"\r\n\t\tstart_position => \"beginning\"\r\n\t}\r\n\tfilter{\r\n\t\tcsv", :backtrace=>["C:/ELK/logstash-7.9.1/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:183:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "C:/ELK/logstash-7.9.1/logstash-core/lib/logstash/java_pipeline.rb:44:in `initialize'", "C:/ELK/logstash-7.9.1/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "C:/ELK/logstash-7.9.1/logstash-core/lib/logstash/agent.rb:357:in `block in converge_state'"]}
[2020-11-28T07:33:37,355][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-11-28T07:33:42,306][INFO ][logstash.runner ] Logstash shut down.
[2020-11-28T07:33:42,328][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
You have not closed your input section before opening your filter section. As a result, the Logstash configuration compiler interprets the csv filter as a csv input.
Try moving the final } up so that it closes the input section before the filter block begins.
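For reference, this is the same configuration with the braces balanced (a sketch only; the settings themselves are unchanged from the question):
input {
    file {
        path => "D:/01_Users/LogStash/Unhealthy.csv"
        start_position => "beginning"
    }
}
filter {
    csv {
        separator => ","
        columns => ["cluster_name","unhealthy_nodes","userid","applicationid","queue","application_type","impact_host","cluster_utilization","queue_utilization","running_containers","running_memory","elapsed_time","tech_datestamp"]
    }
}
output {
    elasticsearch {
        hosts => "http://localhost:9200"
        index => "unhealthy"
        document_type => "unhealthy_data"
    }
    stdout {}
}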

how to configure the elasticsearch.yml for the repository-hdfs plugin of elasticsearch

elasticsearch 2.3.2
repository-hdfs 2.3.1
I configured the elasticsearch.yml file as the official Elastic docs suggest:
repositories
    hdfs:
        uri: "hdfs://<host>:<port>/"    # optional - Hadoop file-system URI
        path: "some/path"               # required - path within the file-system where data is stored/loaded
        load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
        conf_location: "extra-cfg.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
        conf.<key>: "<value>"           # optional - 'inlined' key=value added to the Hadoop configuration
        concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
        compress: "false"               # optional - whether to compress the metadata or not (default)
        chunk_size: "10mb"              # optional - chunk size (disabled by default)
but it raises an exception; the format is incorrect.
Error info:
Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: ScannerException[while scanning a simple key
 in 'reader', line 99, column 2:
    repositories
    ^
could not find expected ':'
 in 'reader', line 100, column 10:
    hdfs:
    ^];
Likely root cause: while scanning a simple key
 in 'reader', line 99, column 2:
    repositories
    ^
could not find expected ':'
 in 'reader', line 100, column 10:
    hdfs:
I edited it as:
repositories:
    hdfs:
        uri: "hdfs://191.168.4.220:9600/"
but it doesn't work.
I want to know what the correct format is.
I found the AWS configuration for elasticsearch.yml:
cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
repositories:
    s3:
        bucket: "bucket_name"
        region: "us-west-2"
    private-bucket:
        bucket: <bucket not accessible by default key>
        access_key: <access key>
        secret_key: <secret key>
    remote-bucket:
        bucket: <bucket in other region>
        region: <region>
    external-bucket:
        bucket: <bucket>
        access_key: <access key>
        secret_key: <secret key>
        endpoint: <endpoint>
        protocol: <protocol>
I imitated it, but it still doesn't work.
I tried to install repository-hdfs 2.3.1 in Elasticsearch 2.3.2, but it failed:
ERROR: Plugin [repository-hdfs] is incompatible with Elasticsearch [2.3.2]. Was designed for version [2.3.1]
The plugin can only be installed in Elasticsearch 2.3.1.
You should specify the uri, path and conf_location options, and maybe delete the conf.<key> option. Take the following config as an example.
security.manager.enabled: false
repositories.hdfs:
    uri: "hdfs://master:9000"       # optional - Hadoop file-system URI
    path: "/aaa/bbb"                # required - path within the file-system where data is stored/loaded
    load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
    conf_location: "/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/core-site.xml,/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/hdfs-site.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
    concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
    compress: "false"               # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"              # optional - chunk size (disabled by default)
I started ES successfully:
[----#----------- elasticsearch-2.3.1]$ bin/elasticsearch
[2016-05-06 04:40:58,173][INFO ][node ] [Protector] version[2.3.1], pid[17641], build[bd98092/2016-04-04T12:25:05Z]
[2016-05-06 04:40:58,174][INFO ][node ] [Protector] initializing ...
[2016-05-06 04:40:58,830][INFO ][plugins ] [Protector] modules [reindex, lang-expression, lang-groovy], plugins [repository-hdfs], sites []
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8gb], net total_space [9.9gb], spins? [unknown], types [rootfs]
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-05-06 04:40:58,863][WARN ][env ] [Protector] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-06 04:40:59,192][INFO ][plugin.hadoop.hdfs ] Loaded Hadoop [1.2.1] libraries from file:/home/ec2-user/app/elasticsearch-2.3.1/plugins/repository-hdfs/
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] initialized
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] starting ...
[2016-05-06 04:41:01,823][INFO ][transport ] [Protector] publish_address {xxxxxxxxx:9300}, bound_addresses {xxxxxxx:9300}
[2016-05-06 04:41:01,830][INFO ][discovery ] [Protector] hdfs/9H8wli0oR3-Zp-M9ZFhNUQ
[2016-05-06 04:41:04,886][INFO ][cluster.service ] [Protector] new_master {Protector}{9H8wli0oR3-Zp-M9ZFhNUQ}{xxxxxxx}{xxxxx:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-06 04:41:04,908][INFO ][http ] [Protector] publish_address {xxxxxxxxx:9200}, bound_addresses {xxxxxxx:9200}
[2016-05-06 04:41:04,908][INFO ][node ] [Protector] started
[2016-05-06 04:41:05,415][INFO ][gateway ] [Protector] recovered [1] indices into cluster_state
[2016-05-06 04:41:06,097][INFO ][cluster.routing.allocation] [Protector] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[website][0], [website][0]] ...]).
But when I try to create a snapshot:
PUT /_snapshot/my_backup
{
    "type": "hdfs",
    "settings": {
        "path": "/aaa/bbb/"
    }
}
I get the following error:
Caused by: java.io.IOException: Mkdirs failed to create file:/aaa/bbb/tests-zTkKRtoZTLu3m3RLascc1w
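One more thing worth checking: the path in that error starts with file:/, which suggests the repository fell back to the local filesystem instead of HDFS. If the uri from elasticsearch.yml is not being picked up, it may help to pass it explicitly when registering the repository (a sketch, assuming the same hdfs://master:9000 namenode as in the config above):
PUT /_snapshot/my_backup
{
    "type": "hdfs",
    "settings": {
        "uri": "hdfs://master:9000",
        "path": "/aaa/bbb/"
    }
}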

INS-20802 - Oracle Net Configuration Assistant failed during installation - CentOS 7

Hello, I am trying to follow the manual for installing Oracle 12c. It was actually already installed on the machine, and then deinstalled.
During installation I get the "[INS-20802] Oracle Net Configuration Assistant failed during installation" error window, which points to a detailed log file where I can see:
INFO: ... GenericInternalPlugIn: starting read loop.
INFO: Read:
WARNING: Skipping line:
INFO: End of argument passing to stdin
INFO: Read: Parsing command line arguments:
WARNING: Skipping line: Parsing command line arguments:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
WARNING: Skipping line: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahnam" = OraDB12Home1
WARNING: Skipping line: Parameter "orahnam" = OraDB12Home1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "instype" = typical
WARNING: Skipping line: Parameter "instype" = typical
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "inscomp" = client,oraclenet,javavm,server,ano
WARNING: Skipping line: Parameter "inscomp" = client,oraclenet,javavm,server,ano
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "insprtcl" = tcp
WARNING: Skipping line: Parameter "insprtcl" = tcp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "cfg" = local
WARNING: Skipping line: Parameter "cfg" = local
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "authadp" = NO_VALUE
WARNING: Skipping line: Parameter "authadp" = NO_VALUE
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
WARNING: Skipping line: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "silent" = true
WARNING: Skipping line: Parameter "silent" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "ouiinternal" = true
WARNING: Skipping line: Parameter "ouiinternal" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Done parsing command line arguments.
WARNING: Skipping line: Done parsing command line arguments.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services Configuration:
WARNING: Skipping line: Oracle Net Services Configuration:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Profile configuration complete.
WARNING: Skipping line: Profile configuration complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Listener Startup:
WARNING: Skipping line: Oracle Net Listener Startup:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Running Listener Control:
WARNING: Skipping line: Running Listener Control:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
WARNING: Skipping line: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener Control complete.
WARNING: Skipping line: Listener Control complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener start failed.
WARNING: Skipping line: Listener start failed.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
WARNING: Skipping line: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services configuration failed. The exit code is 1
WARNING: Skipping line: Oracle Net Services configuration failed. The exit code is 1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Completed Plugin named: Oracle Net Configuration Assistant
And the corresponding trace_OraDB12Home1-1504033PM3901.log:
[main] [ 2015-04-03 15:39:06.329 MSK ] [OracleHome.getVersion:1059] Current Version From Inventory: 12.1.0.2.0
[main] [ 2015-04-03 15:39:06.329 MSK ] [InitialSetup.<init>:4151] Admin location is: /u01/app/oracle/product/12.1.0/db_1/network/admin
[main] [ 2015-04-03 15:39:06.718 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.737 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.795 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.795 MSK ] [InitialSetup.checkHAConfiguration:4808] HA Server is NOT configured.
[main] [ 2015-04-03 15:39:06.797 MSK ] [NetCAResponseFile.<init>:75] Response file initialized: /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getInstalledComponents:114] Installed components from response file: server, net8, javavm
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getVirtualHost:171] Virtual Host from response file: null
[main] [ 2015-04-03 15:39:06.799 MSK ] [SilentConfigure.performSilentConfigure:198] Typical profile configuration.
[main] [ 2015-04-03 15:39:06.801 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.802 MSK ] [SilentConfigure.performSilentConfigure:206] Typical listener configuration.
[main] [ 2015-04-03 15:39:06.839 MSK ] [ConfigureListener.isHASConfigured:1596] Calling SRVM api to check if Oracle Restart is configured ...
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.840 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.841 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASConfigured:1607] Is Oracle Restart configured: false
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASRunning:1636] Is Oracle Restart running: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.listenerExists:396] Is listener "LISTENER" already exists: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.typicalConfigure:257] Checking for free port in range: 1521-1540
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.validateEndPoint:1059] Validating end-point: TCP:1521
[main] [ 2015-04-03 15:39:06.944 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.validateEndPoint:1114] Validation...Complete for TCP/TCPS.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.typicalConfigure:274] Using port: 1521
[main] [ 2015-04-03 15:39:08.684 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:08.685 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.692 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
Maybe the problem is because of:
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
Any ideas what I am doing wrong and how to finally install Oracle?
I found the reason for this exception. If somebody faces the same problem, just create the /etc/oracle folder and give it 777 permissions. For me it helped.
I also got the error "[INS-20802] Oracle Net Configuration Assistant failed" while installing Oracle 12c (12.2.0.1.4) on CentOS 7.
In my case the error went away after adding an entry in the /etc/hosts file with the hostname and its local network IP.
After that change the installation was able to finish successfully.
Resulting /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 centos100
777 is not the solution; it makes your system vulnerable. As suggested in the Oracle docs, the directory privileges should be 775.
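Putting the two comments above together, a minimal sketch of the directory fix on Linux (assuming the usual oracle:oinstall owner and group; adjust to your installation user) would be:
sudo mkdir -p /etc/oracle
sudo chown oracle:oinstall /etc/oracle
sudo chmod 775 /etc/oracle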
For me, on Windows 10, the solution was to install the Microsoft Visual C++ 2010 Redistributable Package (x86).
