Connection refused elasticsearch - elasticsearch

Trying to do a "curl http://localhost:9200" but getting "Failed connection refused". Firewalld is off and the elasticsearch.yml settings are set to the defaults. Below is a portion of the yml file.
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.
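The "Null object returned for RollingFile in Appenders" errors above usually mean log4j could not create its log files, which typically points at a path.logs directory that is missing or not writable by the elasticsearch user. A minimal sketch of that check, assuming the /var/data/elasticsearch and /var/log/elasticsearch values from the yml excerpt above:
# create the configured data/log directories and hand them to the elasticsearch user
sudo mkdir -p /var/data/elasticsearch /var/log/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/data/elasticsearch /var/log/elasticsearch
sudo systemctl restart elasticsearch
# the node should then answer locally
curl http://localhost:9200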

Elastic Search failed to start - invalid SSL configuration for xpack.security.transport.ssl

The OS version is CentOS Linux release 7.8.2003 (Core). I tried two RPMs for the installation and neither worked properly: elasticsearch-1.7.3.noarch.rpm and elasticsearch-8.4.2-x86_64.rpm. For the latter, when running sudo /bin/systemctl start elasticsearch.service, Elasticsearch cannot finish booting, and the log shows "invalid SSL configuration for xpack.security.transport.ssl".
I checked here and here but cannot find the answer and need more help.
I don't know any (initial) password; the installation process did not prompt me with any information.
Info_1:
[root@ali01 elasticsearch]# pwd
/etc/elasticsearch
[root@ali01 elasticsearch]# ls -tl
total 64
-rw-rw---- 1 root elasticsearch 2969 Sep 27 10:48 elasticsearch.yml
-rw-rw---- 1 root elasticsearch 2635 Sep 27 10:23 jvm.options
-rw-rw---- 1 root elasticsearch 2637 Sep 26 17:55 jvm.options.rpmsave
-rw-rw---- 1 root elasticsearch 4303 Sep 26 17:53 elasticsearch.yml.rpmsave
-rw-rw---- 1 root elasticsearch 536 Sep 26 16:58 elasticsearch.keystore
drwxr-x--- 2 root elasticsearch 4096 Sep 26 16:58 certs
drwxr-s--- 2 root elasticsearch 4096 Sep 15 00:33 jvm.options.d
-rw-rw---- 1 root elasticsearch 1042 Sep 15 00:29 elasticsearch-plugins.example.yml
-rw-rw---- 1 root elasticsearch 17417 Sep 15 00:29 log4j2.properties
-rw-rw---- 1 root elasticsearch 473 Sep 15 00:29 role_mapping.yml
-rw-rw---- 1 root elasticsearch 197 Sep 15 00:29 roles.yml
-rw-rw---- 1 root elasticsearch 0 Sep 15 00:29 users
-rw-rw---- 1 root elasticsearch 0 Sep 15 00:29 users_roles
[root@ali01 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch --version
Version: 8.4.2, Build: rpm/89f8c6d8429db93b816403ee75e5c270b43a940a/2022-09-14T16:26:04.382547801Z, JVM: 18.0.2.1
[root@ali01 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-keystore list
autoconfiguration.password_hash
keystore.seed
xpack.security.http.ssl.keystore.secure_password
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
Config_1 (elasticsearch.yml):
[root@ali01 elasticsearch]# cat elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 127.0.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Config_2 (jvm.options):
[root@ali01 elasticsearch]# cat jvm.options
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/jvm-options.html
## for more information.
##
################################################################
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
## -Xms4g
## -Xmx4g
-Xms256m
-Xmx256m
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/heap-size.html
## for more information
##
################################################################
################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################
-XX:+UseG1GC
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError
# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
[root@ali01 elasticsearch]#
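As the WARNING block in the file above says, heap overrides are meant to live in jvm.options.d rather than in jvm.options itself. A small sketch of the recommended layout, using the same 256m values (the heap.options file name is just an example):
sudo tee /etc/elasticsearch/jvm.options.d/heap.options > /dev/null <<'EOF'
-Xms256m
-Xmx256m
EOF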
log (tail -f /var/log/elasticsearch/my-application.log):
[2022-09-27T10:49:34,001][INFO ][o.e.n.Node ] [node-1] version[8.4.2], pid[14086], build[rpm/89f8c6d8429db93b816403ee75e5c270b43a940a/2022-09-14T16:26:04.382547801Z], OS[Linux/3.10.0-1127.19.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/18.0.2.1/18.0.2.1+1-1]
[2022-09-27T10:49:34,037][INFO ][o.e.n.Node ] [node-1] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-09-27T10:49:34,037][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Xms256m, -Xmx256m, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10477436689482229078, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=134217728, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-09-27T10:49:39,206][INFO ][c.a.c.i.j.JacksonVersion ] [node-1] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-09-27T10:49:42,620][INFO ][o.e.p.PluginsService ] [node-1] loaded module [aggs-matrix-stats]
[2022-09-27T10:49:42,620][INFO ][o.e.p.PluginsService ] [node-1] loaded module [analysis-common]
[2022-09-27T10:49:42,621][INFO ][o.e.p.PluginsService ] [node-1] loaded module [constant-keyword]
[2022-09-27T10:49:42,621][INFO ][o.e.p.PluginsService ] [node-1] loaded module [data-streams]
[2022-09-27T10:49:42,621][INFO ][o.e.p.PluginsService ] [node-1] loaded module [frozen-indices]
[2022-09-27T10:49:42,621][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-attachment]
[2022-09-27T10:49:42,622][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-common]
[2022-09-27T10:49:42,623][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-geoip]
[2022-09-27T10:49:42,623][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-user-agent]
[2022-09-27T10:49:42,623][INFO ][o.e.p.PluginsService ] [node-1] loaded module [kibana]
[2022-09-27T10:49:42,624][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-expression]
[2022-09-27T10:49:42,624][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-mustache]
[2022-09-27T10:49:42,624][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-painless]
[2022-09-27T10:49:42,624][INFO ][o.e.p.PluginsService ] [node-1] loaded module [legacy-geo]
[2022-09-27T10:49:42,625][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-extras]
[2022-09-27T10:49:42,625][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-version]
[2022-09-27T10:49:42,625][INFO ][o.e.p.PluginsService ] [node-1] loaded module [old-lucene-versions]
[2022-09-27T10:49:42,625][INFO ][o.e.p.PluginsService ] [node-1] loaded module [parent-join]
[2022-09-27T10:49:42,626][INFO ][o.e.p.PluginsService ] [node-1] loaded module [percolator]
[2022-09-27T10:49:42,633][INFO ][o.e.p.PluginsService ] [node-1] loaded module [rank-eval]
[2022-09-27T10:49:42,634][INFO ][o.e.p.PluginsService ] [node-1] loaded module [reindex]
[2022-09-27T10:49:42,634][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repositories-metering-api]
[2022-09-27T10:49:42,634][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-azure]
[2022-09-27T10:49:42,634][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-encrypted]
[2022-09-27T10:49:42,635][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-gcs]
[2022-09-27T10:49:42,635][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-s3]
[2022-09-27T10:49:42,635][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-url]
[2022-09-27T10:49:42,635][INFO ][o.e.p.PluginsService ] [node-1] loaded module [runtime-fields-common]
[2022-09-27T10:49:42,635][INFO ][o.e.p.PluginsService ] [node-1] loaded module [search-business-rules]
[2022-09-27T10:49:42,636][INFO ][o.e.p.PluginsService ] [node-1] loaded module [searchable-snapshots]
[2022-09-27T10:49:42,638][INFO ][o.e.p.PluginsService ] [node-1] loaded module [snapshot-based-recoveries]
[2022-09-27T10:49:42,638][INFO ][o.e.p.PluginsService ] [node-1] loaded module [snapshot-repo-test-kit]
[2022-09-27T10:49:42,638][INFO ][o.e.p.PluginsService ] [node-1] loaded module [spatial]
[2022-09-27T10:49:42,638][INFO ][o.e.p.PluginsService ] [node-1] loaded module [systemd]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transform]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transport-netty4]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [unsigned-long]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [vector-tile]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [wildcard]
[2022-09-27T10:49:42,639][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-aggregate-metric]
[2022-09-27T10:49:42,640][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-analytics]
[2022-09-27T10:49:42,645][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-async]
[2022-09-27T10:49:42,645][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-async-search]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-autoscaling]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ccr]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-core]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-deprecation]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-enrich]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-eql]
[2022-09-27T10:49:42,646][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-fleet]
[2022-09-27T10:49:42,647][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-graph]
[2022-09-27T10:49:42,647][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-identity-provider]
[2022-09-27T10:49:42,649][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ilm]
[2022-09-27T10:49:42,649][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-logstash]
[2022-09-27T10:49:42,649][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ml]
[2022-09-27T10:49:42,649][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-monitoring]
[2022-09-27T10:49:42,650][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ql]
[2022-09-27T10:49:42,650][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-rollup]
[2022-09-27T10:49:42,650][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-security]
[2022-09-27T10:49:42,650][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-shutdown]
[2022-09-27T10:49:42,650][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-sql]
[2022-09-27T10:49:42,651][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-stack]
[2022-09-27T10:49:42,651][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-text-structure]
[2022-09-27T10:49:42,655][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-voting-only-node]
[2022-09-27T10:49:42,655][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-watcher]
[2022-09-27T10:49:42,656][INFO ][o.e.p.PluginsService ] [node-1] no plugins loaded
[2022-09-27T10:49:50,310][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [3.1gb], net total_space [19.5gb], types [rootfs]
[2022-09-27T10:49:50,314][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [256mb], compressed ordinary object pointers [true]
[2022-09-27T10:49:50,331][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [WHujxIoTQVCOHA2NuQKXqg], cluster name [my-application], roles [data_frozen, ingest, data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml]
[2022-09-27T10:49:56,448][ERROR][o.e.b.Elasticsearch ] [node-1] fatal exception while booting Elasticsearch
org.elasticsearch.ElasticsearchSecurityException: invalid SSL configuration for xpack.security.transport.ssl - server ssl configuration requires a key and certificate, but these have not been configured; you must set either [xpack.security.transport.ssl.keystore.path], or both [xpack.security.transport.ssl.key] and [xpack.security.transport.ssl.certificate]
at org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:635) ~[?:?]
at org.elasticsearch.xpack.core.ssl.SSLService.loadSslConfigurations(SSLService.java:612) ~[?:?]
at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:156) ~[?:?]
at org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:463) ~[?:?]
at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:312) ~[?:?]
at org.elasticsearch.node.Node.lambda$new$15(Node.java:696) ~[elasticsearch-8.4.2.jar:?]
at org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:236) ~[elasticsearch-8.4.2.jar:?]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) ~[?:?]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) ~[?:?]
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:?]
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) ~[?:?]
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) ~[?:?]
at java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627) ~[?:?]
at org.elasticsearch.node.Node.<init>(Node.java:710) ~[elasticsearch-8.4.2.jar:?]
at org.elasticsearch.node.Node.<init>(Node.java:311) ~[elasticsearch-8.4.2.jar:?]
at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214) ~[elasticsearch-8.4.2.jar:?]
at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214) ~[elasticsearch-8.4.2.jar:?]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67) ~[elasticsearch-8.4.2.jar:?]
Refer to the set up basic security and secure HTTP steps and generate the transport key+cert and the HTTP key+cert.
Then run openssl x509 -req -in httpCert.csr -signkey httpCert.key -out httpCert.crt to generate the HTTP cert file, and put the files under /etc/elasticsearch/certs/ and /etc/elasticsearch/certs/httpCert/ respectively.
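For reference, a sketch of how those pieces can be generated with the bundled tooling (output file names are the elasticsearch-certutil defaults; the openssl lines are an assumption for the self-signed HTTP cert used here):
# transport layer: CA plus node certificate
/usr/share/elasticsearch/bin/elasticsearch-certutil ca                              # writes elastic-stack-ca.p12
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12  # writes elastic-certificates.p12
# PEM copy of the CA for the certificate_authorities setting below
openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys -out elastic-stack-ca.pem
# HTTP layer: private key and CSR, which the x509 command above then signs
openssl req -new -newkey rsa:2048 -nodes -keyout httpCert.key -out httpCert.csr
Copy the resulting files into /etc/elasticsearch/certs/ (and certs/httpCert/) and make sure the elasticsearch group can read them.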
Then configure in /etc/elasticsearch/elasticsearch.yml:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate_authorities: certs/elastic-stack-ca.pem
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/httpCert/httpCert.key
xpack.security.http.ssl.certificate: certs/httpCert/httpCert.crt
Start/restart elasticsearch and it is up: systemctl start elasticsearch.service.
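Since the RPM install did not print an initial password, the elastic user's password can be (re)set with the bundled tool before running the health check below:
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic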
[root@ecs-140825 elasticsearch]# curl -X GET "https://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty" --cacert /etc/elasticsearch/certs/httpCert/httpCert.crt -k -u elastic
Enter host password for user 'elastic':
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 1,
"active_shards" : 1,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}

Clickhouse starts with error: Cannot get pipe capacity

After installing clickhouse using apt-get, I try to start it
sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml
but it fails to start, with this error:
Application: DB::ErrnoException: Cannot get pipe capacity, errno: 22, strerror: Invalid argument
full log:
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Logging trace to console
2019.08.28 11:26:50.255115 [ 1 ] {} <Information> : Starting ClickHouse 19.13.3.26 with revision 54425
2019.08.28 11:26:50.255253 [ 1 ] {} <Information> Application: starting up
2019.08.28 11:26:50.260659 [ 1 ] {} <Debug> Application: Set max number of file descriptors to 1048576 (was 1024).
2019.08.28 11:26:50.260715 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2019.08.28 11:26:50.260733 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'America/New_York'.
2019.08.28 11:26:50.261086 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'virtual.rysev' as replica host.
2019.08.28 11:26:50.264129 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
Include not found: networks
2019.08.28 11:26:50.265577 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 512.00 MiB because the system has low amount of memory
2019.08.28 11:26:50.265908 [ 1 ] {} <Information> Application: Mark cache size was lowered to 512.00 MiB because the system has low amount of memory
2019.08.28 11:26:50.265955 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2019.08.28 11:26:50.267614 [ 1 ] {} <Debug> Application: Loaded metadata.
2019.08.28 11:26:50.267981 [ 1 ] {} <Information> Application: Shutting down storages.
2019.08.28 11:26:50.268287 [ 1 ] {} <Debug> Application: Shutted down storages.
2019.08.28 11:26:50.269839 [ 1 ] {} <Debug> Application: Destroyed global context.
2019.08.28 11:26:50.270149 [ 1 ] {} <Error> Application: DB::ErrnoException: Cannot get pipe capacity, errno: 22, strerror: Invalid argument
2019.08.28 11:26:50.270181 [ 1 ] {} <Information> Application: shutting down
2019.08.28 11:26:50.270194 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2019.08.28 11:26:50.270265 [ 3 ] {} <Information> BaseDaemon: Stop SignalListener thread
Please help.
You could try running the following commands:
systemctl enable clickhouse-server
systemctl start clickhouse-server
You can also try to figure out how to properly run clickhouse-server:
less /lib/systemd/system/clickhouse-server.service
I had the same error message when installing clickhouse on a CentOS 7 docker image which was started on a CentOS 6 host (kernel-2.6.32).
The issue seems to be related to a change in clickhouse 19.12:
TraceCollector.cpp:52
int pipe_size = fcntl(trace_pipe.fds_rw[1], F_GETPIPE_SZ);
F_GETPIPE_SZ is available in Linux kernel >=2.6.35, so this might be the issue.
So you can either manually install an older version of ClickHouse from the repository or update your kernel.
rpm: https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/
deb: https://repo.yandex.ru/clickhouse/deb/stable/main/
I tried clickhouse version 19.11.8.46-2 and it instantly worked.
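Put differently, a quick way to choose between those two options is to check the running kernel and, if it is older than 2.6.35, pin an older package version (a sketch; take the exact version string from what the repository actually offers):
uname -r                              # F_GETPIPE_SZ needs kernel >= 2.6.35
apt-cache madison clickhouse-server   # list the versions available from the repo
sudo apt-get install clickhouse-server=19.11.8.46 clickhouse-client=19.11.8.46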

Elasticsearch not creating the index for new pipeline via Logstash

I have set up an ELK stack, but Elasticsearch is not creating the index and I am unable to upload the data. The Elasticsearch and Logstash services are both running.
Below are the details. However, I do not see anything in the logs.
Elastic config:
[root@aruba-elk2 rm_logs]# cat /etc/elasticsearch/elasticsearch.yml
# Elasticserach config
#########################
cluster.name: log-cohort-test
node.name: aruba-elk2
node.master: true
path:
  data: /elk/lib/elasticsearch
  logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
bootstrap.system_call_filter: False
[root@aruba-elk2 rm_logs]#
[root@aruba-elk2 rm_logs]#
Logstash config:
[root@aruba-elk2 rm_logs]# cat /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash
[root@aruba-elk2 rm_logs]# cat /etc/logstash/conf.d/logstash-syslog.conf
input {
file {
path => [ "/elk/rm_logs/*.txt" ]
type => "rmlog"
}
}
filter {
if [type] == "rmlog" {
grok {
match => { "message" => "%{HOSTNAME:hostname},%{DATE:date},%{HOUR:hour1}:%{MINUTE:minute1},%{NUMBER}-%{WORD},%{USER:user},%{USER:user2} %{NUMBER:pid} %{NUMBER:float} %{NUMBER:float} %{NUMBER:number1} %{NUMBER:number2} %{DATA} %{HOUR:hour2}:%{MINUTE:minute2} %{HOUR:hour3}:%{MINUTE:minute3} %{GREEDYDATA:command},%{PATH:path}" }
add_field => [ "received_at", "%{#timestamp}" ]
}
}
}
output {
if [type] == "rmlog" {
elasticsearch {
hosts => ["aruba-elk2:9200"]
manage_template => false
index => "rmlog-%{+YYYY.MM.dd}"
#document_type => "messages"
}
}
}
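Before digging into the Elasticsearch side, the pipeline file itself can be sanity-checked with Logstash's config test mode (a sketch, assuming the default RPM install path):
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-syslog.conf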
Input data Source:
[root@aruba-elk2 rm_logs]# cd /elk/rm_logs/
[root@aruba-elk2 rm_logs]# ls -ltrh | head
total 2.6M
-rw-r--r-- 1 root root 558 Jan 11 11:27 dbxchw092.txt
-rw-r--r-- 1 root root 405 Jan 11 11:27 dbxtx220.txt
-rw-r--r-- 1 root root 241 Jan 11 11:27 dbxcvm139.txt
-rw-r--r-- 1 root root 455 Jan 11 11:27 dbxcnl038.txt
-rw-r--r-- 1 root root 230 Jan 11 11:27 dbxchw052.txt
-rw-r--r-- 1 root root 143 Jan 11 11:27 dbxtx222.txt
-rw-r--r-- 1 root root 577 Jan 11 11:27 dbxtx224.txt
-rw-r--r-- 1 root root 274 Jan 11 11:27 dbxcvm082.txt
-rw-r--r-- 1 root root 281 Jan 11 11:27 dbxcsb003.txt
Sample of above data file:
testhost-in2,19/01/11,06:34,04-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /test/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-
testhost-in2,19/01/11,06:40,09-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /dv/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-\
testhost-in2,19/01/11,06:45,14-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:01 rm -rf /
LOGS:
Logstash logs:
[root@aruba-elk2 logstash]# cat logstash-plain.log
[2019-01-12T23:48:31,653][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-12T23:48:34,959][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-12T23:48:35,374][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://aruba-elk2:9200/]}}
[2019-01-12T23:48:35,588][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://aruba-elk2:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://aruba-elk2:9200/][Manticore::SocketException] Connection refused"}
[2019-01-12T23:48:35,608][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//aruba-elk2:9200"]}
[2019-01-12T23:48:36,063][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_076330d5fd2c2b811bc1960a3d0547be", :path=>["/elk/rm_logs/*.txt"]}
[2019-01-12T23:48:36,095][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x424bb675 run>"}
[2019-01-12T23:48:36,155][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-12T23:48:36,156][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-12T23:48:36,542][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-12T23:48:40,796][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://aruba-elk2:9200/"}
[2019-01-12T23:48:40,855][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-01-12T23:48:40,859][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
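The last lines above show that the output did eventually reconnect to Elasticsearch, so it is worth checking whether any rmlog index exists at all (host and port taken from the output section of the pipeline):
curl 'http://aruba-elk2:9200/_cat/indices/rmlog-*?v'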
Elasticsearch LOGS:
[root@aruba-elk2 elasticsearch]# cat gc.log.0.current | tail
2019-01-13T00:13:29.280+0530: 1237.781: Total time for which application threads were stopped: 0.0002681 seconds, Stopping threads took: 0.0000316 seconds
2019-01-13T00:13:31.281+0530: 1239.782: Total time for which application threads were stopped: 0.0003670 seconds, Stopping threads took: 0.0000586 seconds
2019-01-13T00:13:32.281+0530: 1240.782: Total time for which application threads were stopped: 0.0003134 seconds, Stopping threads took: 0.0000708 seconds
2019-01-13T00:13:37.282+0530: 1245.783: Total time for which application threads were stopped: 0.0004663 seconds, Stopping threads took: 0.0001315 seconds
2019-01-13T00:13:51.284+0530: 1259.785: Total time for which application threads were stopped: 0.0004230 seconds, Stopping threads took: 0.0000691 seconds
2019-01-13T00:13:57.286+0530: 1265.787: Total time for which application threads were stopped: 0.0008421 seconds, Stopping threads took: 0.0002697 seconds
2019-01-13T00:13:58.287+0530: 1266.787: Total time for which application threads were stopped: 0.0004467 seconds, Stopping threads took: 0.0000706 seconds
2019-01-13T00:14:11.288+0530: 1279.789: Total time for which application threads were stopped: 0.0004702 seconds, Stopping threads took: 0.0001105 seconds
2019-01-13T00:14:18.289+0530: 1286.790: Total time for which application threads were stopped: 0.0004123 seconds, Stopping threads took: 0.0000750 seconds
Any help will be appreciated.

ins-20802 - oracle net configuration assistant failed during installation - centos 7

Hello, I am trying to follow the manual for installing Oracle 12c. It was actually already installed on the machine once, and then deinstalled.
During installation I get the "[INS-20802] Oracle Net Configuration Assistant failed during installation" error window, which points to a detailed log file where I can see:
INFO: ... GenericInternalPlugIn: starting read loop.
INFO: Read:
WARNING: Skipping line:
INFO: End of argument passing to stdin
INFO: Read: Parsing command line arguments:
WARNING: Skipping line: Parsing command line arguments:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
WARNING: Skipping line: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahnam" = OraDB12Home1
WARNING: Skipping line: Parameter "orahnam" = OraDB12Home1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "instype" = typical
WARNING: Skipping line: Parameter "instype" = typical
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "inscomp" = client,oraclenet,javavm,server,ano
WARNING: Skipping line: Parameter "inscomp" = client,oraclenet,javavm,server,ano
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "insprtcl" = tcp
WARNING: Skipping line: Parameter "insprtcl" = tcp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "cfg" = local
WARNING: Skipping line: Parameter "cfg" = local
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "authadp" = NO_VALUE
WARNING: Skipping line: Parameter "authadp" = NO_VALUE
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
WARNING: Skipping line: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "silent" = true
WARNING: Skipping line: Parameter "silent" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "ouiinternal" = true
WARNING: Skipping line: Parameter "ouiinternal" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Done parsing command line arguments.
WARNING: Skipping line: Done parsing command line arguments.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services Configuration:
WARNING: Skipping line: Oracle Net Services Configuration:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Profile configuration complete.
WARNING: Skipping line: Profile configuration complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Listener Startup:
WARNING: Skipping line: Oracle Net Listener Startup:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Running Listener Control:
WARNING: Skipping line: Running Listener Control:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
WARNING: Skipping line: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener Control complete.
WARNING: Skipping line: Listener Control complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener start failed.
WARNING: Skipping line: Listener start failed.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
WARNING: Skipping line: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services configuration failed. The exit code is 1
WARNING: Skipping line: Oracle Net Services configuration failed. The exit code is 1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Completed Plugin named: Oracle Net Configuration Assistant
And the corresponding trace_OraDB12Home1-1504033PM3901.log:
[main] [ 2015-04-03 15:39:06.329 MSK ] [OracleHome.getVersion:1059] Current Version From Inventory: 12.1.0.2.0
[main] [ 2015-04-03 15:39:06.329 MSK ] [InitialSetup.<init>:4151] Admin location is: /u01/app/oracle/product/12.1.0/db_1/network/admin
[main] [ 2015-04-03 15:39:06.718 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.737 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.795 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.795 MSK ] [InitialSetup.checkHAConfiguration:4808] HA Server is NOT configured.
[main] [ 2015-04-03 15:39:06.797 MSK ] [NetCAResponseFile.<init>:75] Response file initialized: /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getInstalledComponents:114] Installed components from response file: server, net8, javavm
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getVirtualHost:171] Virtual Host from response file: null
[main] [ 2015-04-03 15:39:06.799 MSK ] [SilentConfigure.performSilentConfigure:198] Typical profile configuration.
[main] [ 2015-04-03 15:39:06.801 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.802 MSK ] [SilentConfigure.performSilentConfigure:206] Typical listener configuration.
[main] [ 2015-04-03 15:39:06.839 MSK ] [ConfigureListener.isHASConfigured:1596] Calling SRVM api to check if Oracle Restart is configured ...
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.840 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.841 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASConfigured:1607] Is Oracle Restart configured: false
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASRunning:1636] Is Oracle Restart running: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.listenerExists:396] Is listener "LISTENER" already exists: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.typicalConfigure:257] Checking for free port in range: 1521-1540
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.validateEndPoint:1059] Validating end-point: TCP:1521
[main] [ 2015-04-03 15:39:06.944 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.validateEndPoint:1114] Validation...Complete for TCP/TCPS.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.typicalConfigure:274] Using port: 1521
[main] [ 2015-04-03 15:39:08.684 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:08.685 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.692 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
Maybe the problem is because of:
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
Any ideas what I am doing wrong and how to finally install Oracle?
I found the reason for this exception. If somebody faces the same problem, just create the /etc/oracle folder and give it 777 permissions. For me it helped.
I also got the error "[INS-20802] Oracle Net Configuration Assistant failed" while installing Oracle 12c (12.2.0.1.4) on CentOS 7.
In my case the error went away after adding an entry in the /etc/hosts file with the hostname and its local network IP.
After that change the installation was able to finish successfully.
Resulting /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 centos100
777 is not the solution; it makes your system vulnerable. As suggested in the Oracle docs, the directory privileges should be 775.
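A sketch of that, with the usual oracle:oinstall ownership (adjust owner and group to whatever your install uses):
sudo mkdir -p /etc/oracle
sudo chown oracle:oinstall /etc/oracle
sudo chmod 775 /etc/oracle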
For me in Windows 10 the solution was to install Microsoft Visual C++ 2010 Redistributable Package (x86)

Unable to deploy to standalone oc4j

I am trying to deploy an EAR to an OC4J standalone server. I think I have deployed it successfully because the operation completed without any warnings.
Now, to test, I guess I have to bind this web app to some website. When I tried to bind, I got the following error:
C:\oracle\JDEV2\j2ee\home>java -jar admin.jar ormi://127.0.0.1:22667 oc4jadmin w
elcome -bindWebApp appr_ear appr default-web-site appr
Jul 24, 2012 10:45:31 AM oracle.j2ee.rmi.RMIMessages EXCEPTION_ORIGINATES_FROM_T
HE_REMOTE_SERVER
WARNING: Exception returned by remote server: {0}
java.lang.ExceptionInInitializerError
at org.apache.tiles.factory.TilesContainerFactory.createTilesContainer(T
ilesContainerFactory.java:197)
at org.apache.tiles.factory.TilesContainerFactory.createContainer(TilesC
ontainerFactory.java:163)
at org.apache.tiles.web.startup.TilesListener.createContainer(TilesListe
ner.java:90)
at org.apache.struts2.tiles.StrutsTilesListener.createContainer(StrutsTi
lesListener.java:66)
at org.apache.tiles.web.startup.TilesListener.contextInitialized(TilesLi
stener.java:57)
at com.evermind.server.http.HttpApplication.initDynamic(HttpApplication.
java:1140)
at com.evermind.server.http.HttpApplication.<init>(HttpApplication.java:
741)
at com.evermind.server.ApplicationStateRunning.getHttpApplication(Applic
ationStateRunning.java:431)
at com.evermind.server.Application.getHttpApplication(Application.java:5
86)
at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.cre
ateHttpApplicationFromReference(HttpSite.java:1987)
at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.<in
it>(HttpSite.java:1906)
at com.evermind.server.http.HttpSite.addHttpApplication(HttpSite.java:16
03)
at oracle.oc4j.admin.internal.WebApplicationBinder.bindWebApp(WebApplica
tionBinder.java:302)
at com.evermind.server.administration.DefaultApplicationServerAdministra
tor.bindWebApp(DefaultApplicationServerAdministrator.java:424)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(Relea
sableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.comm
ons.logging.LogConfigurationException: No suitable Log constructor [Ljava.lang.C
lass;@72e694 for org.apache.commons.logging.impl.Log4JLogger (Caused by java.lan
g.NoClassDefFoundError: org/apache/log4j/Category) (Caused by org.apache.commons
.logging.LogConfigurationException: No suitable Log constructor [Ljava.lang.Clas
s;@72e694 for org.apache.commons.logging.impl.Log4JLogger (Caused by java.lang.N
oClassDefFoundError: org/apache/log4j/Category))
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactory
Impl.java:543)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactory
Impl.java:235)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactory
Impl.java:209)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:351)
at org.apache.tiles.impl.BasicTilesContainer.<clinit>(BasicTilesContaine
r.java:78)
... 21 more
Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log
constructor [Ljava.lang.Class;@72e694 for org.apache.commons.logging.impl.Log4J
Logger (Caused by java.lang.NoClassDefFoundError: org/apache/log4j/Category)
at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogF
actoryImpl.java:413)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactory
Impl.java:529)
... 25 more
Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2328)
at java.lang.Class.getConstructor0(Class.java:2640)
at java.lang.Class.getConstructor(Class.java:1629)
at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogF
actoryImpl.java:410)
... 26 more
Error: null
Before executing this command, I tried to deploy as follows:
C:\oracle\JDEV2\j2ee\home>java -jar admin.jar ormi://127.0.0.1:22667 oc4jadmin w
elcome -deploy -file C:\RTTTLea\dist\appr.ear -deploymentName appr_ear
And it looks like the deployment was successful.
[ 2012-07-24 10:49:40.170 EDT ] Application Deployer for appr_ear STARTS.
[ 2012-07-24 10:49:40.170 EDT ] Stopping application : appr_ear
[ 2012-07-24 10:49:40.201 EDT ] Stopped application : appr_ear
[ 2012-07-24 10:49:40.201 EDT ] Undeploy previous deployment
[ 2012-07-24 10:49:40.935 EDT ] Initialize C:\oracle\JDEV2\jdev\extensions\oracl
e.adfp.seededoc4j.10.1.3\j2ee\home\applications\appr_ear.ear begins...
[ 2012-07-24 10:49:44.326 EDT ] Initialize C:\oracle\JDEV2\jdev\extensions\oracl
e.adfp.seededoc4j.10.1.3\j2ee\home\applications\appr_ear.ear ends...
[ 2012-07-24 10:49:44.326 EDT ] Starting application : appr_ear
[ 2012-07-24 10:49:44.326 EDT ] Initializing ClassLoader(s)
[ 2012-07-24 10:49:44.326 EDT ] Initializing EJB container
[ 2012-07-24 10:49:44.326 EDT ] Loading connector(s)
[ 2012-07-24 10:49:44.342 EDT ] Starting up resource adapters
[ 2012-07-24 10:49:44.342 EDT ] Initializing EJB sessions
[ 2012-07-24 10:49:44.342 EDT ] Committing ClassLoader(s)
[ 2012-07-24 10:49:44.342 EDT ] Initialize appr begins...
[ 2012-07-24 10:49:44.357 EDT ] Initialize appr ends...
[ 2012-07-24 10:49:44.357 EDT ] Started application : appr_ear
[ 2012-07-24 10:49:44.357 EDT ] Application Deployer for appr_ear COMPLETES. Ope
ration time: 4187 msecs
Also, after deploying, I can see my application in the list of applications in View -> Connections -> Application Server, which means the deployment was successful. Now I want to open the homepage to verify.
java.lang.NoClassDefFoundError: org/apache/log4j/Category seems to indicate that your EAR file is missing the log4j jar, or that a wrong version of log4j is in your server.
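A quick way to confirm which of the two it is: list the contents of the EAR (and of the server's shared library directories) and look for the log4j jar. For example:
# on the Windows machine from the question, findstr /i log4j can stand in for grep
jar tf appr.ear | grep -i log4j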
