ClickHouse fails to start with error: Cannot get pipe capacity

After installing ClickHouse using apt-get, I try to start it:
sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml
but it fails to start with an error:
Application: DB::ErrnoException: Cannot get pipe capacity, errno: 22, strerror: Invalid argument
Full log:
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Logging trace to console
2019.08.28 11:26:50.255115 [ 1 ] {} <Information> : Starting ClickHouse 19.13.3.26 with revision 54425
2019.08.28 11:26:50.255253 [ 1 ] {} <Information> Application: starting up
2019.08.28 11:26:50.260659 [ 1 ] {} <Debug> Application: Set max number of file descriptors to 1048576 (was 1024).
2019.08.28 11:26:50.260715 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2019.08.28 11:26:50.260733 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'America/New_York'.
2019.08.28 11:26:50.261086 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'virtual.rysev' as replica host.
2019.08.28 11:26:50.264129 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
Include not found: networks
2019.08.28 11:26:50.265577 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 512.00 MiB because the system has low amount of memory
2019.08.28 11:26:50.265908 [ 1 ] {} <Information> Application: Mark cache size was lowered to 512.00 MiB because the system has low amount of memory
2019.08.28 11:26:50.265955 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2019.08.28 11:26:50.267614 [ 1 ] {} <Debug> Application: Loaded metadata.
2019.08.28 11:26:50.267981 [ 1 ] {} <Information> Application: Shutting down storages.
2019.08.28 11:26:50.268287 [ 1 ] {} <Debug> Application: Shutted down storages.
2019.08.28 11:26:50.269839 [ 1 ] {} <Debug> Application: Destroyed global context.
2019.08.28 11:26:50.270149 [ 1 ] {} <Error> Application: DB::ErrnoException: Cannot get pipe capacity, errno: 22, strerror: Invalid argument
2019.08.28 11:26:50.270181 [ 1 ] {} <Information> Application: shutting down
2019.08.28 11:26:50.270194 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2019.08.28 11:26:50.270265 [ 3 ] {} <Information> BaseDaemon: Stop SignalListener thread
Please help.

You could try running the following commands:
systemctl enable clickhouse-server
systemctl start clickhouse-server
You can also look at how the package expects clickhouse-server to be run:
less /lib/systemd/system/clickhouse-server.service
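The directives that matter here are which user the service runs as and what command it executes; a quick way to pull them out of the unit file:
grep -E '^(User|ExecStart)=' /lib/systemd/system/clickhouse-server.service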

I had the same error message when installing ClickHouse in a CentOS 7 Docker image that was started on a CentOS 6 host (kernel 2.6.32).
The issue seems to be related to a change in ClickHouse 19.12:
TraceCollector.cpp:52
int pipe_size = fcntl(trace_pipe.fds_rw[1], F_GETPIPE_SZ);
F_GETPIPE_SZ is available in Linux kernel >=2.6.35, so this might be the issue.
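A quick check for whether this applies to your machine (inside a container, uname still reports the host's kernel):
# F_GETPIPE_SZ requires Linux >= 2.6.35
uname -r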
So you can either manually install an older version of ClickHouse from the repository or update your kernel.
rpm: https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/
deb: https://repo.yandex.ru/clickhouse/deb/stable/main/
I tried clickhouse version 19.11.8.46-2 and it instantly worked.
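If upgrading the kernel is not an option, pinning an older package at install time is one way to do it; a sketch for apt (the exact package names and version strings depend on what the repository offers):
apt-cache madison clickhouse-server   # list the versions the repo provides
sudo apt-get install clickhouse-server=19.11.8.46 clickhouse-client=19.11.8.46 clickhouse-common-static=19.11.8.46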

Related

Not able to use bind-mount volumes with Elasticsearch running in a podman container

I'm new to Elasticsearch (ES) and I'm currently setting up a customized ES 8.5.0 podman container (rootless install) built from the ES base RPM repository.
In this installation I'm using a dedicated Linux user 'elasticadm', which owns the files inside the container and on the local Red Hat Linux 8.5 host.
Basically I use the following ownership for the installation on the host: /app/elasticsearch/data, /var/log/elasticsearch/elasticsearch.log and /etc/elasticsearch/elasticsearch.yml are owned elasticadm:elasticsearch (after the error below occurred I also tried elasticadm:root, with no more success).
Whenever I run the Elasticsearch podman container with bind-mount volumes, startup fails with the following error message:
"Fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml"
Without bind-mount volumes the ES podman installation is fine, though of course that is of no practical interest. I'm able to deploy the container without any bind-mount volumes:
podman run --detach --name es850 --publish 9200:9200 --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
[2022-11-09T20:37:41,777][INFO ][o.e.n.Node ] [Prod] version[8.5.0], pid[72], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T20:37:41,782][INFO ][o.e.n.Node ] [Prod] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T20:37:41,783][INFO ][o.e.n.Node ] [Prod] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5358173424819503746, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T20:37:43,721][INFO ][c.a.c.i.j.JacksonVersion ] [Prod] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [aggs-matrix-stats]
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [analysis-common]
[2022-11-09T20:37:45,176][INFO ][o.e.p.PluginsService ] [Prod] loaded module [apm]
......
[2022-11-09T20:37:45,190][INFO ][o.e.p.PluginsService ] [Prod] loaded module [x-pack-watcher]
[2022-11-09T20:37:45,191][INFO ][o.e.p.PluginsService ] [Prod] no plugins loaded
[2022-11-09T20:37:48,027][WARN ][stderr ] [Prod] Nov 09, 2022 8:37:48 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T20:37:48,028][WARN ][stderr ] [Prod] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T20:37:48,048][INFO ][o.e.n.Node ] [Prod] node name [Prod], node ID [CvroQFRsTxKqyWfwcOJGag], cluster name [elasticsearch], roles [data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest]
[2022-11-09T20:37:51,831][INFO ][o.e.x.s.Security ] [Prod] Security is enabled
[2022-11-09T20:37:52,214][INFO ][o.e.x.s.a.s.FileRolesStore] [Prod] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2022-11-09T20:37:52,628][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [Prod] Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the `bin/elasticsearch-reset-password` tool to set the password for the elastic user.
[2022-11-09T20:37:52,724][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Prod] [controller/96] [Main.cc#123] controller (64 bit): Version 8.5.0 (Build 3922fab346e761) Copyright (c) 2022 Elasticsearch BV
[2022-11-09T20:37:53,354][INFO ][o.e.t.n.NettyAllocator ] [Prod] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-09T20:37:53,381][INFO ][o.e.i.r.RecoverySettings ] [Prod] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-11-09T20:37:53,425][INFO ][o.e.d.DiscoveryModule ] [Prod] using discovery type [single-node] and seed hosts providers [settings]
[2022-11-09T20:37:54,888][INFO ][o.e.n.Node ] [Prod] initialized
[2022-11-09T20:37:54,889][INFO ][o.e.n.Node ] [Prod] starting ...
[2022-11-09T20:37:54,901][INFO ][o.e.x.s.c.f.PersistentCache] [Prod] persistent cache index loaded
[2022-11-09T20:37:54,903][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [Prod] deprecation component started
[2022-11-09T20:37:55,011][INFO ][o.e.t.TransportService ] [Prod] publish_address {10.0.2.100:9300}, bound_addresses {[::]:9300}
[2022-11-09T20:37:55,122][WARN ][o.e.b.BootstrapChecks ] [Prod] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-11-09T20:37:55,124][INFO ][o.e.c.c.ClusterBootstrapService] [Prod] this node has not joined a bootstrapped cluster yet; [cluster.initial_master_nodes] is set to [Prod]
[2022-11-09T20:37:55,133][INFO ][o.e.c.c.Coordinator ] [Prod] setting initial configuration to VotingConfiguration{CvroQFRsTxKqyWfwcOJGag}
[2022-11-09T20:37:55,327][INFO ][o.e.c.s.MasterService ] [Prod] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw} completing election], term: 1, version: 1, delta: master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}
[2022-11-09T20:37:55,352][INFO ][o.e.c.c.CoordinationState] [Prod] cluster UUID set to [_wcBh4-JRtuLqIBXyNhZ5A]
[2022-11-09T20:37:55,370][INFO ][o.e.c.s.ClusterApplierService] [Prod] master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2022-11-09T20:37:55,439][INFO ][o.e.r.s.FileSettingsService] [Prod] starting file settings watcher ...
[2022-11-09T20:37:55,447][INFO ][o.e.r.s.FileSettingsService] [Prod] file settings service up and running [tid=51]
[2022-11-09T20:37:55,456][INFO ][o.e.h.AbstractHttpServerTransport] [Prod] publish_address {10.0.2.100:9200}, bound_addresses {[::]:9200}
[2022-11-09T20:37:55,457][INFO ][o.e.n.Node ] [Prod] started {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}{ml.max_jvm_size=1958739968, ml.allocated_processors_double=4.0, xpack.installed=true, ml.machine_memory=3917570048, ml.allocated_processors=4}
[2022-11-09T20:37:55,510][INFO ][o.e.g.GatewayService ] [Prod] recovered [0] indices into cluster_state
[2022-11-09T20:37:55,691][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.watch-history-16] for index patterns [.watcher-history-16*]
[2022-11-09T20:37:55,700][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2022-11-09T20:37:55,707][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2022-11-09T20:37:55,718][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [.deprecation-indexing-mappings]
[2022-11-09T20:37:55,723][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [synthetics-mappings]
...
[2022-11-09T20:37:56,392][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [Prod] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
[2022-11-09T20:37:56,510][INFO ][o.e.l.LicenseService ] [Prod] license [4b5d6876-1402-470e-96fd-f9ff8211cca7] mode [basic] - valid
[2022-11-09T20:37:56,511][INFO ][o.e.x.s.a.Realms ] [Prod] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-11-09T20:37:56,538][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [Prod] Node [{Prod}{CvroQFRsTxKqyWfwcOJGag}] is selected as the current health node.
# and connection test is fine:
curl --cacert http_ca.crt -u elastic https://127.0.0.1:9200
Enter host password for user 'elastic':
{
"name" : "Prod",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "........",
"version" : {
"number" : "8.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "c94b4700cda13820dad5aa74fae6db185ca5c304",
"build_date" : "2022-10-24T16:54:16.433628434Z",
"build_snapshot" : false,
"lucene_version" : "9.4.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
Elasticsearch podman installation with bind-mount volumes (fails):
podman run --detach --name es850 --publish 9200:9200 \
    --volume=/etc/elasticsearch/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:Z \
    --volume=/var/log/elasticsearch/elasticsearch.log:/var/log/elasticsearch/elasticsearch.log:Z \
    --volume=/app/elasticsearch/data:/app/elasticsearch/data:Z \
    --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
Aborting auto configuration because the node keystore contains password settings already
[2022-11-09T15:56:27,292][INFO ][o.e.n.Node ] [0d8414e9b51b] version[8.5.0], pid[76], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T15:56:27,299][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T15:56:27,300][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10492222574682252504, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T15:56:29,369][INFO ][c.a.c.i.j.JacksonVersion ] [0d8414e9b51b] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T15:56:30,863][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [aggs-matrix-stats]
.............
[2022-11-09T15:56:30,880][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [x-pack-watcher]
[2022-11-09T15:56:30,881][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] no plugins loaded
[2022-11-09T15:56:33,720][WARN ][stderr ] [0d8414e9b51b] Nov 09, 2022 3:56:33 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T15:56:33,721][WARN ][stderr ] [0d8414e9b51b] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T15:56:33,740][INFO ][o.e.n.Node ] [0d8414e9b51b] node name [0d8414e9b51b], node ID [rMFgxntETo63opwgU7P9sg], cluster name [elasticsearch], roles [ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen]
[2022-11-09T15:56:36,194][ERROR][o.e.b.Elasticsearch ] [0d8414e9b51b] fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:648)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.loadSslConfigurations(SSLService.java:612)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:156)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:465)
at org.elasticsearch.xcore@8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:314)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.lambda$new$15(Node.java:704)
at org.elasticsearch.server@8.5.0/org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:252)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.<init>(Node.java:719)
at org.elasticsearch.server@8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)
at org.elasticsearch.server@8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log
# Configuration is the following (elasticsearch.yml):
node.name: Prod # Name is 'Prod' but it's not a true production server
path.data: /app/elasticsearch/data
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
ingest.geoip.downloader.enabled: false
# Security:
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
#transport.host: 0.0.0.0
$ podman exec -it es850 bash
[elasticadm@8a9ceb50b3b4 /]$ /usr/share/elasticsearch/bin/elasticsearch-keystore list
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
autoconfiguration.password_hash
keystore.seed
xpack.security.http.ssl.keystore.secure_password
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
Any ideas / advice would be really appreciated, because I don't know what's suddenly wrong with the xpack.security parameters or how they relate to the podman bind-mount volumes.
The base xpack.security settings seem well configured (this is the initial base configuration, unmodified so far).
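One way to narrow this down (a debugging sketch using the image and paths from the question, not a confirmed fix) is to check which elasticsearch.yml the container actually reads once the bind-mount is in place:
# Override the entrypoint so the container just prints the mounted config
podman run --rm --entrypoint cat \
    --volume=/etc/elasticsearch/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:Z \
    localhost/elasticsearch_cust:1.4 /etc/elasticsearch/elasticsearch.yml
If that prints the expected file, the next suspects are the SELinux labels and the ownership as seen from inside the rootless user namespace (podman unshare ls -l /etc/elasticsearch).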

Connection refused elasticsearch

Trying to do a "curl http://localhost:9200" but getting "Failed: connection refused". Firewalld is off and the elasticsearch.yml settings are at their defaults. Below is a portion of the yml file.
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.
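Since the service exits before it ever binds port 9200, the connection refused is only a symptom. A sketch of first checks (paths taken from the yml above; the Log4j appender errors usually point at a log path that cannot be created or written):
sudo journalctl -u elasticsearch -n 50 --no-pager      # full startup error beyond the status excerpt
ls -ld /var/log/elasticsearch /var/data/elasticsearch  # both must exist and be writable by the elasticsearch user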

Port forwarding with Elastic docker image

I'm trying out Docker with this Docker image. Things should be straightforward, but they aren't.
I ran this command to start the container:
sudo docker run -d -p 9200:9200 -p 9300:9300 elasticsearch -Des.node.name="ElasticTestNode"
Then I tried to run this command in my host machine:
# curl -XPUT "http://localhost:9200/movies/movie/3" -d'
{
"title": "To Kill a Mockingbird",
"director": "Robert Mulligan",
"year": 1962,
"genres": ["Crime", "Drama", "Mystery"]
}'
I was expecting to see some kind of success message. Instead, the command simply hangs with no output and never returns. I have to Ctrl-C to quit.
Running out of ideas, I started a bash shell inside the container and tested:
sudo docker exec -i -t some-docker-id /bin/bash
root@somehash:/usr/share/elasticsearch# curl -XPUT "http://localhost:9200/movies/movie/3" -d'
{
"title": "To Kill a Mockingbird",
"director": "Robert Mulligan",
"year": 1962,
"genres": ["Crime", "Drama", "Mystery"]
}'
{"_index":"movies","_type":"movie","_id":"3","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}root#somehash:/usr/share/elasticsearch#
And it was a success. What have I done wrong?
Update: I tried another command on my host machine:
$ curl -XPUT -v "http://localhost:9200/movies/movie/3" -d'
{
"title": "To Kill a Mockingbird",
"director": "Robert Mulligan",
"year": 1962,
"genres": ["Crime", "Drama", "Mystery"]
}'
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> PUT /movies/movie/3 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 139
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 139 out of 139 bytes
Stuck here...
# sudo docker logs docker-id
[2016-09-28 11:52:16,630][INFO ][node ] [ElasticTestNode] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 11:52:16,631][INFO ][node ] [ElasticTestNode] initializing ...
[2016-09-28 11:52:17,202][INFO ][plugins ] [ElasticTestNode] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-28 11:52:17,219][INFO ][env ] [ElasticTestNode] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda8)]], net usable_space [5.4gb], net total_space [19.5gb], spins? [possibly], types [ext4]
[2016-09-28 11:52:17,219][INFO ][env ] [ElasticTestNode] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-09-28 11:52:18,816][INFO ][node ] [ElasticTestNode] initialized
[2016-09-28 11:52:18,816][INFO ][node ] [ElasticTestNode] starting ...
[2016-09-28 11:52:18,877][INFO ][transport ] [ElasticTestNode] publish_address {172.17.0.22:9300}, bound_addresses {[::]:9300}
[2016-09-28 11:52:18,881][INFO ][discovery ] [ElasticTestNode] elasticsearch/LCo5k0dARimsWFXjN1Yu0A
[2016-09-28 11:52:21,915][INFO ][cluster.service ] [ElasticTestNode] new_master {ElasticTestNode}{LCo5k0dARimsWFXjN1Yu0A}{172.17.0.22}{172.17.0.22:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-28 11:52:21,924][INFO ][http ] [ElasticTestNode] publish_address {172.17.0.22:9200}, bound_addresses {[::]:9200}
[2016-09-28 11:52:21,925][INFO ][node ] [ElasticTestNode] started
[2016-09-28 11:52:21,960][INFO ][gateway ] [ElasticTestNode] recovered [0] indices into cluster_state
It seems that Docker's port mapping sometimes fails; I have experienced this issue multiple times. The same test script works after one boot but not after another.
One consistent thing is that once things go bad on a given boot, they keep failing every time I restart the Docker image. It keeps failing even after I ditch the container and start a new one from the same image, so it seems to be an issue with the Docker daemon.
The way I solve this is to stop all containers and restart the Docker daemon:
sudo docker stop $(docker ps -a -q)
sudo systemctl restart docker
sudo docker start $(docker ps -a -q)
It works for me. I hope someone finds it helpful.
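When this happens it can also help to confirm, before restarting the daemon, what mapping Docker thinks it installed (assuming the default iptables-based networking):
sudo docker port some-docker-id               # host mappings for 9200/9300
sudo iptables -t nat -nL DOCKER | grep 9200   # the NAT rule behind the mapping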

How to configure elasticsearch.yml for the repository-hdfs plugin of Elasticsearch

elasticsearch 2.3.2
repository-hdfs 2.3.1
I configured the elasticsearch.yml file as in the official Elastic documentation:
repositories
    hdfs:
        uri: "hdfs://<host>:<port>/"    # optional - Hadoop file-system URI
        path: "some/path"               # required - path within the file-system where data is stored/loaded
        load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
        conf_location: "extra-cfg.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
        conf.<key>: "<value>"           # optional - 'inlined' key=value added to the Hadoop configuration
        concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
        compress: "false"               # optional - whether to compress the metadata or not (default)
        chunk_size: "10mb"              # optional - chunk size (disabled by default)
but it raises an exception; the format is incorrect.
Error info:
Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: ScannerException[while scanning a simple key
in 'reader', line 99, column 2:
repositories
^
could not find expected ':'
in 'reader', line 100, column 10:
hdfs:
^];
Likely root cause: while scanning a simple key
in 'reader', line 99, column 2:
repositories
^
could not find expected ':'
in 'reader', line 100, column 10:
hdfs:
I edited it as:
repositories:
    hdfs:
        uri: "hdfs://191.168.4.220:9600/"
but it still doesn't work.
I want to know what the correct format is.
I found the AWS configuration example for elasticsearch.yml:
cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
repositories:
    s3:
        bucket: "bucket_name"
        region: "us-west-2"
        private-bucket:
            bucket: <bucket not accessible by default key>
            access_key: <access key>
            secret_key: <secret key>
        remote-bucket:
            bucket: <bucket in other region>
            region: <region>
        external-bucket:
            bucket: <bucket>
            access_key: <access key>
            secret_key: <secret key>
            endpoint: <endpoint>
            protocol: <protocol>
I imitated it, but it still doesn't work.
I tried to install repository-hdfs 2.3.1 in Elasticsearch 2.3.2, but it failed:
ERROR: Plugin [repository-hdfs] is incompatible with Elasticsearch [2.3.2]. Was designed for version [2.3.1]
The plugin can only be installed in Elasticsearch 2.3.1.
You should specify the uri, path and conf_location options, and probably delete the conf.<key> option. Take the following config as an example.
security.manager.enabled: false
repositories.hdfs:
    uri: "hdfs://master:9000"   # optional - Hadoop file-system URI
    path: "/aaa/bbb"            # required - path within the file-system where data is stored/loaded
    load_defaults: "true"       # optional - whether to load the default Hadoop configuration (default) or not
    conf_location: "/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/core-site.xml,/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/hdfs-site.xml"   # optional - Hadoop configuration XML to be loaded (use commas for multi values)
    concurrent_streams: 5       # optional - the number of concurrent streams (defaults to 5)
    compress: "false"           # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"          # optional - chunk size (disabled by default)
I started ES successfully:
[----@----------- elasticsearch-2.3.1]$ bin/elasticsearch
[2016-05-06 04:40:58,173][INFO ][node ] [Protector] version[2.3.1], pid[17641], build[bd98092/2016-04-04T12:25:05Z]
[2016-05-06 04:40:58,174][INFO ][node ] [Protector] initializing ...
[2016-05-06 04:40:58,830][INFO ][plugins ] [Protector] modules [reindex, lang-expression, lang-groovy], plugins [repository-hdfs], sites []
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8gb], net total_space [9.9gb], spins? [unknown], types [rootfs]
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-05-06 04:40:58,863][WARN ][env ] [Protector] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-06 04:40:59,192][INFO ][plugin.hadoop.hdfs ] Loaded Hadoop [1.2.1] libraries from file:/home/ec2-user/app/elasticsearch-2.3.1/plugins/repository-hdfs/
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] initialized
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] starting ...
[2016-05-06 04:41:01,823][INFO ][transport ] [Protector] publish_address {xxxxxxxxx:9300}, bound_addresses {xxxxxxx:9300}
[2016-05-06 04:41:01,830][INFO ][discovery ] [Protector] hdfs/9H8wli0oR3-Zp-M9ZFhNUQ
[2016-05-06 04:41:04,886][INFO ][cluster.service ] [Protector] new_master {Protector}{9H8wli0oR3-Zp-M9ZFhNUQ}{xxxxxxx}{xxxxx:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-06 04:41:04,908][INFO ][http ] [Protector] publish_address {xxxxxxxxx:9200}, bound_addresses {xxxxxxx:9200}
[2016-05-06 04:41:04,908][INFO ][node ] [Protector] started
[2016-05-06 04:41:05,415][INFO ][gateway ] [Protector] recovered [1] indices into cluster_state
[2016-05-06 04:41:06,097][INFO ][cluster.routing.allocation] [Protector] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[website][0], [website][0]] ...]).
But when I try to create a snapshot:
PUT /_snapshot/my_backup
{
    "type": "hdfs",
    "settings": {
        "path": "/aaa/bbb/"
    }
}
I get the following error:
Caused by: java.io.IOException: Mkdirs failed to create file:/aaa/bbb/tests-zTkKRtoZTLu3m3RLascc1w

Issue while installing Oracle 12c on a Linux machine - the installer GUI popup window never appears

I am trying to install Oracle 12c on an x86_64 GNU/Linux machine. This is my first time installing it. I ran the installer from the database folder using the ./runInstaller -debug command. The output is as follows:
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 14103 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3964 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-08-28_09-28-56AM. Please wait ...Archive: ../stage/Components/oracle.jdk/1.6.0.75.0/1/DataFiles/filegroup3.jar
inflating: /tmp/OraInstall2015-08-28_09-28-56AM/jdk/lib/ir.idl
inflating: /tmp/OraInstall2015-08-28_09-28-56AM/jdk/lib/sa-jdi.jar
...................
Archive: ../stage/Components/oracle.jdk/1.6.0.75.0/1/DataFiles/filegroup2.jar
.............
Archive: ../stage/Components/oracle.jdk/1.6.0.75.0/1/DataFiles/filegroup4.jar
.......
Archive: ../stage/Components/oracle.jdk/1.6.0.75.0/1/DataFiles/filegroup1.jar
............
Archive: ../stage/Components/oracle.jdk/1.6.0.75.0/1/DataFiles/filegroup5.jar
........
5 archives were successfully processed.
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup6.jar
...........
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup2.jar
..........
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup4.jar
............
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup7.jar
........
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup1.jar
.............
Archive: ../stage/Components/oracle.swd.oui/12.1.0.2.0/1/DataFiles/filegroup5.jar
....
6 archives were successfully processed.
Archive: ../stage/Components/oracle.swd.oui.core/12.1.0.2.0/1/DataFiles/filegroup3.jar
......
Archive: ../stage/Components/oracle.swd.oui.core/12.1.0.2.0/1/DataFiles/filegroup2.jar
........
Archive: ../stage/Components/oracle.swd.oui.core/12.1.0.2.0/1/DataFiles/filegroup4.jar
.........
Archive: ../stage/Components/oracle.swd.oui.core/12.1.0.2.0/1/DataFiles/filegroup1.jar
..........
Archive: ../stage/Components/oracle.swd.oui.core/12.1.0.2.0/1/DataFiles/filegroup5.jar
.....
5 archives were successfully processed.
Archive: ../stage/Components/oracle.swd.oui.core.min/12.1.0.2.0/1/DataFiles/filegroup2.jar
....
Archive: ../stage/Components/oracle.swd.oui.core.min/12.1.0.2.0/1/DataFiles/filegroup1.jar
......
2 archives were successfully processed.
LD_LIBRARY_PATH environment variable :
-------------------------------------------------------
Total args: 26
Command line argument array elements ...
Arg:0:/tmp/OraInstall2015-08-28_09-28-56AM/jdk/jre/bin/java:
Arg:1:-Doracle.installer.library_loc=/tmp/OraInstall2015-08-28_09-28-56AM/oui/lib/linux64:
Arg:2:-Doracle.installer.oui_loc=/tmp/OraInstall2015-08-28_09-28-56AM/oui:
Arg:3:-Doracle.installer.bootstrap=TRUE:
Arg:4:-Doracle.installer.startup_location=/oracle12c/database/install:
Arg:5:-Doracle.installer.jre_loc=/tmp/OraInstall2015-08-28_09-28-56AM/jdk/jre:
Arg:6:-Doracle.installer.nlsEnabled="TRUE":
Arg:7:-Doracle.installer.prereqConfigLoc= :
Arg:8:-Doracle.installer.unixVersion=2.6.32-279.el6.x86_64:
Arg:9:-Doracle.install.setup.workDir=/oracle12c/database:
Arg:10:-DCVU_OS_SETTINGS=SHELL_NOFILE_SOFT_LIMIT:1024,SHELL_UMASK:0022:
Arg:11:-Xms150m:
Arg:12:-Xmx256m:
Arg:13:-XX:MaxPermSize=128M:
Arg:14:-cp:
Arg:15:/tmp/OraInstall2015-08-28_09-28-56AM::/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/emca.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/entityManager_proxy.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/prov_fixup.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/orai18n-utility.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/installcommons_1.0.0b.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/wsclient_extended.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/instdb.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/jsch.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/remoteinterfaces.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/OraPrereqChecks.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/orai18n-mapping.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/instcommon.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/emCoreConsole.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/OraPrereq.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/cvu.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/ssh.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/ojdbc6.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/adf-share-ca.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/jmxspi.jar:/tmp/OraInstall2015-08-28_09-28-56AM/ext/jlib/javax.security.jacc_1.0.0.0_1-1.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/OraInstaller.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/oneclick.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/xmlparserv2.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/share.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/OraInstallerNet.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/emCfg.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/emocmutl.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/OraPrereq.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/jsch.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/ssh.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/remoteinterfaces.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/http_client.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/OraSuiteInstaller.jar:../stage/Components/oracle.swd.opatch/12.1.0.2.0/1/DataFiles/jlib/opatch.jar:../stage/Components/oracle.swd.opatch/12.1.0.2.0/1/DataFiles/jlib/opatchactions.jar:../stage/Components/oracle.swd.opatch/12.1.0.2.0/1/DataFiles/jlib/opatchprereq.jar:../stage/Components/oracle.swd.opatch/12.1.0.2.0/1/DataFiles/jlib/opatchutil.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/OraCheckPoint.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstImages.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_de.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_es.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_fr.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_it.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_ja.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_ko.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_pt_BR.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_zh_CN.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/InstHelp_zh_TW.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/oracle_ice.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/help-share.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/ohj.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/ewt3.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/ewt3-swingaccess.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/swingaccess.jar::/tmp/OraInstall2015-08-28_09-28
-56AM/oui/jlib/jewt4.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/orai18n-collation.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/orai18n-mapping.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/ojmisc.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/xml.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/srvm.jar:/tmp/OraInstall2015-08-28_09-28-56AM/oui/jlib/srvmasm.jar:
Arg:16:oracle.install.ivw.db.driver.DBInstaller:
Arg:17:-scratchPath:
Arg:18:/tmp/OraInstall2015-08-28_09-28-56AM:
Arg:19:-sourceLoc:
Arg:20:/oracle12c/database/install/../stage/products.xml:
Arg:21:-sourceType:
Arg:22:network:
Arg:23:-timestamp:
Arg:24:2015-08-28_09-28-56AM:
Arg:25:-debug:
-------------------------------------------------------
Initializing Java Virtual Machine from /tmp/OraInstall2015-08-28_09-28-56AM/jdk/jre/bin/java. Please wait...
[oracle@korbsbvmlx22 database]$ [main] [ 2015-08-28 09:29:05.048 IST ] [ClusterVerification.getInstance:426] Method Entry. workDir=/tmp frameworkHome=/oracle12c/database/install/../stage/cvu
[main] [ 2015-08-28 09:29:05.062 IST ] [ParamManager.<init>:668] m_paramInstantiated set to TRUE
[main] [ 2015-08-28 09:29:05.062 IST ] [VerificationUtil.getLocalHost:1312] Hostname retrieved: korbsbvmlx22, returned: korbsbvmlx22
[main] [ 2015-08-28 09:29:05.064 IST ] [VerificationUtil.getDestLoc:3712] ==== CV_DESTLOC(pre-fetched value): '/tmp/'
[main] [ 2015-08-28 09:29:05.065 IST ] [VerificationUtil.getExecutionEnvironment:7586] RDBMS Version is -->12.1.0.2.0
[main] [ 2015-08-28 09:29:05.065 IST ] [VerificationUtil.validateCmdLineExecEnvironment:7602] Entered validateCmdLineExecEnvironment
[main] [ 2015-08-28 09:29:05.105 IST ] [Version.isPre:610] version to be checked 12.1.0.2.0 major version to check against 10
[main] [ 2015-08-28 09:29:05.105 IST ] [Version.isPre:621] isPre.java: Returning FALSE
[main] [ 2015-08-28 09:29:05.106 IST ] [Version.isPre:610] version to be checked 12.1.0.2.0 major version to check against 10
[main] [ 2015-08-28 09:29:05.106 IST ] [Version.isPre:621] isPre.java: Returning FALSE
[main] [ 2015-08-28 09:29:05.107 IST ] [Version.isPre:610] version to be checked 12.1.0.2.0 major version to check against 11
[main] [ 2015-08-28 09:29:05.107 IST ] [Version.isPre:621] isPre.java: Returning FALSE
[main] [ 2015-08-28 09:29:05.107 IST ] [Version.isPre:642] version to be checked 12.1.0.2.0 major version to check against 11 minor version to check against 2
[main] [ 2015-08-28 09:29:05.108 IST ] [Version.isPre:651] isPre: Returning FALSE for major version check
[main] [ 2015-08-28 09:29:05.108 IST ] [UnixSystem.isHAConfigured:2788] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-08-28 09:29:05.109 IST ] [VerificationUtil.isHAConfigured:4181] haConfigured=false
[main] [ 2015-08-28 09:29:05.109 IST ] [VerificationUtil.validateCmdLineExecEnvironment:7639] Exit validateCmdLineExecEnvironment
[main] [ 2015-08-28 09:29:05.116 IST ] [ConfigUtil.importConfig:97] ==== CVU config file: /oracle12c/database/install/../stage/cvu//cv/admin/cvu_config
[main] [ 2015-08-28 09:29:05.117 IST ] [ConfigUtil.importConfig:114] ==== Picked up config variable: cv_raw_check_enabled : TRUE
[main] [ 2015-08-28 09:29:05.118 IST ] [ConfigUtil.importConfig:114] ==== Picked up config variable: cv_sudo_binary_location : /usr/local/bin/sudo
[main] [ 2015-08-28 09:29:05.118 IST ] [ConfigUtil.importConfig:114] ==== Picked up config variable: cv_pbrun_binary_location : /usr/local/bin/pbrun
[main] [ 2015-08-28 09:29:05.119 IST ] [ConfigUtil.importConfig:114] ==== Picked up config variable: cv_assume_cl_version : 12.1
[main] [ 2015-08-28 09:29:05.119 IST ] [ConfigUtil.isDefined:200] ==== Is ORACLE_SRVM_REMOTESHELL defined? : false
[main] [ 2015-08-28 09:29:05.121 IST ] [Library.load:194] library.load
[main] [ 2015-08-28 09:29:05.122 IST ] [sPlatform.isHybrid:66] osName=Linux osArch=amd64 JVM=64 rc=false
[main] [ 2015-08-28 09:29:05.122 IST ] [Library.load:262] Property oracle.installer.library_loc is set to value=/tmp/OraInstall2015-08-28_09-28-56AM/oui/lib/linux64
[main] [ 2015-08-28 09:29:05.123 IST ] [Library.load:264] Loading library /tmp/OraInstall2015-08-28_09-28-56AM/oui/lib/linux64/libsrvm12.so
[main] [ 2015-08-28 09:29:05.124 IST ] [ConfigUtil.getConfiguredValue:182] ==== Fallback to env var 'ORACLE_SRVM_REMOTESHELL'=null
[main] [ 2015-08-28 09:29:05.125 IST ] [ConfigUtil.isDefined:200] ==== Is ORACLE_SRVM_REMOTECOPY defined? : false
[main] [ 2015-08-28 09:29:05.125 IST ] [ConfigUtil.getConfiguredValue:182] ==== Fallback to env var 'ORACLE_SRVM_REMOTECOPY'=null
As the debug messages show, after "Please wait..." no installer window opens; further messages are printed to the console and then nothing more happens, and it is always like that. I have checked the logs in the /tmp directory and they appear clean. I have been stuck on this issue for a long time and do not know how to proceed.
Please guide.
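Since the installer gets as far as launching the JVM, one common culprit when the OUI window never appears is the X display rather than the installer itself. A first check, assuming you are connected over SSH (a diagnostic sketch, not a confirmed fix):
echo $DISPLAY                                        # must be set (e.g. localhost:10.0) for the GUI to open
xdpyinfo > /dev/null && echo "X display reachable"   # any X client works as a probe
# if DISPLAY is empty, reconnect with X11 forwarding: ssh -X oracle@<host>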
