Corda node crashed after "ArtemisMessagingClient failed. Shutting down." - amqp

The following errors occur while running two nodes and a notary with Corda OSS 4.3 (Amazon EFS is used for the Artemis service of each node and the notary).
nodeA:
[INFO ] 2021-03-24T01:53:33,526Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine. - Transport Error TransportImpl [_connectionEndpoint=org.apache.qpid.proton.engine.impl.ConnectionImpl#d8755f, org.apache.qpid.proton.engine.impl.TransportImpl#720cb721] {localLegalName=O=nodeA, L=Local, C=JP, remoteLegalName=O=nodeB, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:33,526Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine. - Error: connection aborted {localLegalName=O=nodeA, L=Local, C=JP, remoteLegalName=O=nodeB, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:33,527Z [nioEventLoopGroup-2-1] netty.AMQPClient. - Disconnected from [NLBendpoint]:10005
[INFO ] 2021-03-24T01:53:33,527Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler. - Closed client connection 828af8c0 from [NLBendpoint]:10005 to /xx.xx.x.xx:40438 {allowedRemoteLegalNames=O=nodeB, L=Local, C=JP, localCert=O=nodeA, L=Local, C=JP, remoteAddress=[NLBendpoint]:10005, remoteCert=O=nodeB, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:33,527Z [nioEventLoopGroup-2-1] bridging.AMQPBridgeManager$AMQPBridge. - Bridge Disconnected {legalNames=O=nodeB, L=Local, C=JP, maxMessageSize=10485760, queueName=internal.peers.DLB29JcZp4kCP2aGGZKGkhw2X5RenndTjEK4xy48iT9643, targets=[NLBendpoint]:10005}
[WARN ] 2021-03-24T01:55:59,747Z [Thread-17936 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2936f48a)] core.client. - AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /xxx.0.0.1:53166 within the 60,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
[WARN ] 2021-03-24T01:55:59,748Z [Thread-949 (ActiveMQ-client-global-threads)] core.client. - AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection#6eb6efd1[ID=e834052e, local= /127.0.0.1:53170, remote=localhost/127.0.0.1:10008] [code=CONNECTION_TIMEDOUT]
[WARN ] 2021-03-24T01:55:59,751Z [Thread-948 (ActiveMQ-client-global-threads)] core.client. - AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection#505dd5b8[ID=f1885302, local= /127.0.0.1:53166, remote=localhost/127.0.0.1:10008] [code=CONNECTION_TIMEDOUT]
[WARN ] 2021-03-24T01:55:59,751Z [Thread-950 (ActiveMQ-client-global-threads)] core.client. - AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection#57579387[ID=718e48b8, local= /127.0.0.1:53168, remote=localhost/127.0.0.1:10008] [code=CONNECTION_TIMEDOUT]
[WARN ] 2021-03-24T01:55:59,774Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler. - Closing channel due to nonrecoverable exception AMQ119014: Timed out after waiting 30,000 ms for response when sending packet 68 {allowedRemoteLegalNames=O=nodeB, L=Local, C=JP, localCert=O=nodeA, L=Local, C=JP, remoteAddress=[NLBendpoint]:10005, remoteCert=O=nodeB, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:55:59,775Z [nioEventLoopGroup-2-1] netty.AMQPClient. - Retry connect to [NLBendpoint]:10005
[ERROR] 2021-03-24T01:55:59,779Z [Thread-612] errorAndTerminate. - ArtemisMessagingClient failed. Shutting down.
notary:
[INFO ] 2021-03-24T01:53:34,850Z [nioEventLoopGroup-2-4] engine.ConnectionStateMachine. - Transport Error TransportImpl [_connectionEndpoint=org.apache.qpid.proton.engine.impl.ConnectionImpl#1a1be565, org.apache.qpid.proton.engine.impl.TransportImpl#1e6940e2] {localLegalName=O=Notary1, L=Local, C=JP, remoteLegalName=O=nodeA, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:34,850Z [nioEventLoopGroup-2-4] engine.ConnectionStateMachine. - Error: connection aborted {localLegalName=O=Notary1, L=Local, C=JP, remoteLegalName=O=nodeA, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:34,851Z [nioEventLoopGroup-2-4] netty.AMQPClient. - Disconnected from [NLBendpoint]:10008
[INFO ] 2021-03-24T01:53:34,851Z [nioEventLoopGroup-2-4] netty.AMQPChannelHandler. - Closed client connection 9da3b393 from [NLBendpoint]:10008 to /xx.xx.x.xx:33438 {allowedRemoteLegalNames=O=nodeA, L=Local, C=JP, localCert=O=Notary1, L=Local, C=JP, remoteAddress=[NLBendpoint]:10008, remoteCert=O=nodeA, L=Local, C=JP, serverMode=false}
[INFO ] 2021-03-24T01:53:34,851Z [nioEventLoopGroup-2-4] bridging.AMQPBridgeManager$AMQPBridge. - Bridge Disconnected {legalNames=O=nodeA, L=Local, C=JP, maxMessageSize=10485760, queueName=internal.peers.DLHVntq87Ai3vLSuQzG8BoKcc2napU6aU3NPVFwiF73322, targets=[NLBendpoint]:10008}
[INFO ] 2021-03-24T01:54:03,123Z [nioEventLoopGroup-2-3] netty.AMQPClient. - Retry connect to [NLBendpoint]:10005
[WARN ] 2021-03-24T01:54:17,939Z [nioEventLoopGroup-2-2] netty.AMQPChannelHandler. - SSL Handshake timed out {allowedRemoteLegalNames=O=nodeA, L=Local, C=JP, localCert=null, remoteAddress=[NLBendpoint]:10008, remoteCert=null, serverMode=false}
[ERROR] 2021-03-24T01:54:17,939Z [nioEventLoopGroup-2-2] netty.AMQPChannelHandler. - Handshake failure handshake timed out {allowedRemoteLegalNames=O=nodeA, L=Local, C=JP, localCert=null, remoteAddress=[NLBendpoint]:10008, remoteCert=null, serverMode=false}
[INFO ] 2021-03-24T01:56:11,385Z [nioEventLoopGroup-2-2] netty.AMQPClient. - Retry connect to [NLBendpoint]:10005
[INFO ] 2021-03-24T01:56:11,392Z [nioEventLoopGroup-2-3] netty.AMQPClient. - Failed to connect to [NLBendpoint]:10005
[INFO ] 2021-03-24T01:56:13,393Z [nioEventLoopGroup-2-4] netty.AMQPClient. - Retry connect to [NLBendpoint]:10005
[INFO ] 2021-03-24T01:56:13,398Z [nioEventLoopGroup-2-1] netty.AMQPClient. - Failed to connect to [NLBendpoint]:10005
After these logs were output, the nodeA process went down (the notary process kept running).
What could be the cause of this problem?
I suspect that the connection to the Artemis service was lost as a result of some problem with the Amazon EFS connection, because the following entries appear in the OS log:
Mar 24 10:55:51 [serverName] stunnel: LOG5[4]: Connection reset: 1105153036 byte(s) sent to TLS, 839120060 byte(s) sent to socket
Mar 24 10:55:54 [serverName] stunnel: LOG5[5]: Service [efs] accepted connection from xxx.x.x.x:38710
Mar 24 10:55:54 [serverName] stunnel: LOG5[5]: s_connect: connected xx.xx.x.xx:2049
Mar 24 10:55:54 [serverName] stunnel: LOG5[5]: Service [efs] connected remote server from xx.xx.x.xx:51468
Mar 24 10:55:55 [serverName] stunnel: LOG5[5]: Certificate accepted at depth=0: CN=*.efs.ap-northeast-1.amazonaws.com
Mar 24 10:55:55 [serverName] stunnel: LOG3[5]: transfer: s_poll_wait: TIMEOUTclose exceeded: closing
Mar 24 10:55:55 [serverName] stunnel: LOG5[5]: Connection closed: 0 byte(s) sent to TLS, 0 byte(s) sent to socket
Mar 24 10:55:55 [serverName] stunnel: LOG5[6]: Service [efs] accepted connection from xxx.x.x.x:38716
Mar 24 10:55:55 [serverName] stunnel: LOG5[6]: s_connect: connected xx.xx.x.xx:2049
Mar 24 10:55:55 [serverName] stunnel: LOG5[6]: Service [efs] connected remote server from xx.xx.x.xx:51474

I believe we talked about this on Slack, but yes: if you start a Corda node and it can't bind to its p2p port (the p2pAddress), that can cause Artemis errors like the ones you're describing.
It might also be something strange going on in your network security group. Make sure you can get this working on your local machine first, and that the nodes can all ping / telnet each other on the ports you expect.
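A quick way to sanity-check both of those conditions is a minimal Java sketch like the following (the host, ports, and class name are placeholders, not values from your deployment). Run the bind check while the node is stopped, since a running node already holds its own p2p port.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class P2pCheck {
    public static void main(String[] args) throws IOException {
        String peerHost = "peer-nlb.example.com"; // placeholder for the NLB endpoint
        int peerPort = 10005;                     // placeholder for the peer's p2p port
        int localP2pPort = 10008;                 // placeholder for this node's own p2p port

        // 1. Can this host reach the peer's advertised p2pAddress at all?
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(peerHost, peerPort), 5_000);
            System.out.println("TCP connect to " + peerHost + ":" + peerPort + " succeeded");
        }

        // 2. Is the local p2p port free for the node (and its embedded Artemis
        //    broker) to bind? This throws a BindException if something else holds it.
        try (ServerSocket serverSocket = new ServerSocket(localP2pPort)) {
            System.out.println("Port " + localP2pPort + " is free to bind");
        }
    }
}

If either step fails on a node host, that points at the security group rules or a conflicting process rather than at Artemis itself.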

Related

Kerberos problem: GSSException: No valid credentials provided

My application sends data to Kafka; Kerberos is used for authentication. Everything works fine for around 20 days, and then I get the following exception:
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Initiating connection to node mkav2.dc.ex.com:9092 (id: 101 rack: null)
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to SEND_HANDSHAKE_REQUEST
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Creating SaslClient: client=lpa/appX.dc.ex.com#DC.EX.COM;service=kafka;serviceHostname=mkav2.dc.ex.com;mechs=[GSSAPI]
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.network.Selector : Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 101
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Completed connection to node 101. Fetching API versions.
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to INITIAL
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.network.Selector : Connection with mkav2.dc.ex.com/172.10.15.44 disconnected
javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTH_FAILED state.
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:298)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslToken(SaslClientAuthenticator.java:215)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:183)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:76)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:376)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:280)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:278)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:278)
... 9 common frames omitted
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 14 common frames omitted
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Node 101 disconnected.
2020-01-07 22:22:08.484 WARN 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Connection to node 101 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
After restarting the application, everything works fine for another 20 days or so, and then I get the same exception again. These are the ticket properties in the krb5.conf file:
ticket_lifetime = 86400
renew_lifetime = 604800
Any ideas on why this could be happening?
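One mitigation I have seen suggested (though I am not sure it addresses the root cause here) is to have the Kafka client log in from a keytab rather than relying on an external ticket cache, so it can obtain a fresh TGT itself once the old one can no longer be renewed. A rough sketch, assuming a kafka-clients version that supports the per-client sasl.jaas.config property; the keytab path and principal are placeholders:

import java.util.Properties;

public class KafkaKerberosProps {
    // Builds SASL/GSSAPI client properties that authenticate from a keytab, so
    // the producer can re-obtain its own TGT instead of depending on a ticket
    // cache that eventually exhausts its renew_lifetime.
    public static Properties saslProperties() {
        Properties props = new Properties();
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true storeKey=true "
                + "keyTab=\"/etc/security/keytabs/app.keytab\" "   // placeholder path
                + "principal=\"app@EXAMPLE.COM\";");               // placeholder principal
        return props;
    }

    public static void main(String[] args) {
        saslProperties().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}

These properties would be merged into the producer configuration alongside the usual bootstrap.servers and serializer settings.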

Connection refused elasticsearch

Trying to do a "curl http://localhost:9200" but getting "Failed connection refused". Firewalld is off, and the elasticsearch.yml settings are left at their defaults. Below is a portion of the yml file.
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.

Apache Camel + HTTPS REST API (POST)

I'm a newbie to Apache Camel, and lately I've been trying to make a POST request to an HTTPS REST API.
I have gone through many posts and documentation, but I still couldn't get the gist of this.
Please find my code below:
from("timer:aTimer?period=20s")
    .process(ex -> ex.getIn().setBody(
        "{\n" +
        " \"userId\": 777,\n" +
        " \"title\": \"sample\",\n" +
        " \"body\": \"my body\"\n" +
        " }"
    ))
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("restlet:https://jsonplaceholder.typicode.com/posts")
    .log("${body}");
Whenever I run my application, I'm getting the below error.
Started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is starting
INFO ManagedManagementStrategy - JMX is enabled
INFO DefaultTypeConverter - Type converters loaded (core: 192, classpath: 14)
INFO DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
Mar 05, 2018 3:20:45 PM org.restlet.ext.httpclient.HttpClientHelper start
INFO: Starting the Apache HTTP client
INFO DefaultCamelContext - Route: route1 started and consuming from: timer://aTimer?period=20s
INFO DefaultCamelContext - Total 1 routes, of which 1 are started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) started in 0.879 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutting down
INFO DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 300 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=3018]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 299 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=4020]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 298 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=5023]
Mar 05, 2018 3:20:51 PM org.restlet.ext.httpclient.internal.HttpMethodCall sendRequest
WARNING: An error occurred during the communication with the remote HTTP server.
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
at sun.security.ssl.InputRecord.read(InputRecord.java:527)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:573)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:557)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:414)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:339)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:363)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
WARN TimerConsumer - Error processing exchange. Exchange[ID-ubuntu-Latitude-6430U-1520243444162-0-1]. Caused by: [org.apache.camel.component.restlet.RestletOperationException - Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server]
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
ERROR DefaultErrorHandler - Failed delivery for (MessageId: ID-ubuntu-Latitude-6430U-1520243444162-0-2 on ExchangeId: ID-ubuntu-Latitude-6430U-1520243444162-0-1). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [timer://aTimer?period=20s ] [ 5321]
[route1 ] [process1 ] [Processor#0x33ae3bf8 ] [ 4]
[route1 ] [setHeader1 ] [setHeader[CamelHttpMethod] ] [ 0]
[route1 ] [setHeader2 ] [setHeader[Content-Type] ] [ 0]
[route1 ] [to1 ] [restlet:https://jsonplaceholder.typicode.com/443:posts ] [ 5308]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Mar 05, 2018 3:20:52 PM org.restlet.ext.httpclient.HttpClientHelper stop
INFO: Stopping the HTTP client
INFO DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: timer://aTimer?period=20s
INFO DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 3 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) uptime 7.927 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutdown in 3.048 seconds
Please help me. I've also tried to use the Apache HTTP4 component, but still no luck.

Spark on Yarn job failed with ExitCode:1 and stderr says "Can't find main class"

We tried to submit a simple SparkPi example to Spark on YARN. The .bat file is written as below:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 4g --executor-memory 1g --executor-cores 1 .\examples\target\spark-examples_2.10-1.4.0.jar 10
pause
Our HDFS and YARN work well. We are using Hadoop 2.7.0 and Spark 1.4.1. We have only one node, which acts as both NameNode and DataNode.
When we execute it, it fails, and the log says the following:
2015-08-21 11:07:22,044 DEBUG [main] | ===============================================================================
2015-08-21 11:07:22,044 DEBUG [main] | Yarn AM launch context:
2015-08-21 11:07:22,044 DEBUG [main] | user class: org.apache.spark.examples.SparkPi
2015-08-21 11:07:22,044 DEBUG [main] | env:
2015-08-21 11:07:22,044 DEBUG [main] | CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__hadoop_conf__<CPS>{{PWD}}/__spark__.jar<CPS>%HADOOP_HOME%\etc\hadoop<CPS>%HADOOP_HOME%\share\hadoop\common\*<CPS>%HADOOP_HOME%\share\hadoop\common\lib\*<CPS>%HADOOP_HOME%\share\hadoop\mapreduce\*<CPS>%HADOOP_HOME%\share\hadoop\mapreduce\lib\*<CPS>%HADOOP_HOME%\share\hadoop\hdfs\*<CPS>%HADOOP_HOME%\share\hadoop\hdfs\lib\*<CPS>%HADOOP_HOME%\share\hadoop\yarn\*<CPS>%HADOOP_HOME%\share\hadoop\yarn\lib\*<CPS>%HADOOP_MAPRED_HOME%\share\hadoop\mapreduce\*<CPS>%HADOOP_MAPRED_HOME%\share\hadoop\mapreduce\lib\*
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_FILE_SIZES -> 165181064,1420218
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1440062075415_0026
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_USER -> msrabi
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_MODE -> true
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1440126441200,1440126441575
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES -> hdfs://msra-sa-44:9000/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-assembly-1.4.0-hadoop2.7.0.jar#__spark__.jar,hdfs://msra-sa-44:9000/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-examples_2.10-1.4.0.jar#__app__.jar
2015-08-21 11:07:22,060 DEBUG [main] | resources:
2015-08-21 11:07:22,060 DEBUG [main] | __app__.jar -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-examples_2.10-1.4.0.jar" } size: 1420218 timestamp: 1440126441575 type: FILE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | __spark__.jar -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-assembly-1.4.0-hadoop2.7.0.jar" } size: 165181064 timestamp: 1440126441200 type: FILE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | __hadoop_conf__ -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/__hadoop_conf__7908628615251032149.zip" } size: 82888 timestamp: 1440126441794 type: ARCHIVE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | command:
2015-08-21 11:07:22,075 DEBUG [main] | {{JAVA_HOME}}/bin/java -server -Xmx4096m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.app.name=org.apache.spark.examples.SparkPi' '-Dspark.executor.memory=1g' '-Dspark.driver.memory=4g' '-Dspark.master=yarn-cluster' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.SparkPi' --jar file:/D:/sp/./examples/target/spark-examples_2.10-1.4.0.jar --arg '10' --executor-memory 1024m --executor-cores 1 --num-executors 3 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
2015-08-21 11:07:22,075 DEBUG [main] | ===============================================================================
...........(omitting some lines)......
2015-08-21 11:07:23,231 INFO [main] | Application report for application_1440062075415_0026 (state: ACCEPTED)
2015-08-21 11:07:23,247 DEBUG [main] |
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1440126442169
final status: UNDEFINED
tracking URL: http://msra-sa-44:8088/proxy/application_1440062075415_0026/
user: msrabi
2015-08-21 11:07:24,263 TRACE [main] | 1: Call -> MSRA-SA-44/10.190.173.181:8032: getApplicationReport {application_id { id: 26 cluster_timestamp: 1440062075415 }}
2015-08-21 11:07:24,263 DEBUG [IPC Parameter Sending Thread #0] | IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi sending #37
2015-08-21 11:07:24,263 DEBUG [IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi] | IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi got value #37
2015-08-21 11:07:24,263 DEBUG [main] | Call: getApplicationReport took 0ms
2015-08-21 11:07:24,263 TRACE [main] | 1: Response <- MSRA-SA-44/10.190.173.181:8032: getApplicationReport {application_report { applicationId { id: 26 cluster_timestamp: 1440062075415 } user: "msrabi" queue: "default" name: "org.apache.spark.examples.SparkPi" host: "N/A" rpc_port: -1 yarn_application_state: ACCEPTED trackingUrl: "http://msra-sa-44:8088/proxy/application_1440062075415_0026/" diagnostics: "" startTime: 1440126442169 finishTime: 0 final_application_status: APP_UNDEFINED app_resource_Usage { num_used_containers: 1 num_reserved_containers: 0 used_resources { memory: 4608 virtual_cores: 1 } reserved_resources { memory: 0 virtual_cores: 0 } needed_resources { memory: 4608 virtual_cores: 1 } memory_seconds: 0 vcore_seconds: 0 } originalTrackingUrl: "N/A" currentApplicationAttemptId { application_id { id: 26 cluster_timestamp: 1440062075415 } attemptId: 1 } progress: 0.0 applicationType: "SPARK" }}
2015-08-21 11:07:24,263 INFO [main] | Application report for application_1440062075415_0026 (state: ACCEPTED)
.......(omitting some lines where the state are all ACCEPTED and final status are all UNDEFINED).....
2015-08-21 11:07:30,359 INFO [main] | Application report for application_1440062075415_0026 (state: FAILED)
2015-08-21 11:07:30,359 DEBUG [main] |
client token: N/A
diagnostics: Application application_1440062075415_0026 failed 2 times due to AM Container for appattempt_1440062075415_0026_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://msra-sa-44:8088/cluster/app/application_1440062075415_0026Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1440062075415_0026_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Shell output: 1 file(s) moved.
Then we opened stderr, and it says:
Error: Could not find or load main class 'Dspark.app.name=org.apache.spark.examples.SparkPi'
It's so strange: this should be a parameter passed to java, yet java seems to have treated it as the main class. There should be a main-class parameter in the command section of the log, but there is not.
How can that happen? What should we do to find out what's wrong with it?
Thank you!
We solved this problem.
The root cause is that, when generating the java command line, our Spark wraps the parameters in single quotes ('-Dxxxx'). Single quotes only work on Linux; on Windows, the parameters must either not be wrapped or be wrapped in double quotes ("-Dxxxx"). The only way to solve this is to edit the Spark source code and re-compile it.
It seems that this is currently a known Spark issue (https://issues.apache.org/jira/browse/SPARK-5754).
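To see the quoting difference concretely, a tiny argument-echo program (hypothetical, just for illustration; not part of Spark) makes the shell behaviour visible:

public class ArgEcho {
    // Prints each argument on its own line, wrapped in brackets so any quote
    // characters that survive the shell are visible.
    public static void main(String[] args) {
        for (String arg : args) {
            System.out.println("[" + arg + "]");
        }
    }
}

On a Linux shell, java '-Dfoo=bar' ArgEcho starts ArgEcho normally because the shell strips the single quotes and java sees an ordinary -D option. On Windows cmd, single quotes are not quoting characters, so the first token is passed through literally; java does not recognise it as an option and tries to load it as the main class, which is the same kind of "Could not find or load main class" failure shown above.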

INS-20802 - Oracle Net Configuration Assistant failed during installation - CentOS 7

Hello, I am trying to follow the manual for installing Oracle 12c. It was actually already installed on the machine and then deinstalled.
During installation I get the "[INS-20802] Oracle Net Configuration Assistant failed during installation" error window, and it points to a detailed log file, where I can see:
INFO: ... GenericInternalPlugIn: starting read loop.
INFO: Read:
WARNING: Skipping line:
INFO: End of argument passing to stdin
INFO: Read: Parsing command line arguments:
WARNING: Skipping line: Parsing command line arguments:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
WARNING: Skipping line: Parameter "orahome" = /u01/app/oracle/product/12.1.0/db_1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "orahnam" = OraDB12Home1
WARNING: Skipping line: Parameter "orahnam" = OraDB12Home1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "instype" = typical
WARNING: Skipping line: Parameter "instype" = typical
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "inscomp" = client,oraclenet,javavm,server,ano
WARNING: Skipping line: Parameter "inscomp" = client,oraclenet,javavm,server,ano
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "insprtcl" = tcp
WARNING: Skipping line: Parameter "insprtcl" = tcp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "cfg" = local
WARNING: Skipping line: Parameter "cfg" = local
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "authadp" = NO_VALUE
WARNING: Skipping line: Parameter "authadp" = NO_VALUE
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
WARNING: Skipping line: Parameter "responsefile" = /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "silent" = true
WARNING: Skipping line: Parameter "silent" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Parameter "ouiinternal" = true
WARNING: Skipping line: Parameter "ouiinternal" = true
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Done parsing command line arguments.
WARNING: Skipping line: Done parsing command line arguments.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services Configuration:
WARNING: Skipping line: Oracle Net Services Configuration:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Profile configuration complete.
WARNING: Skipping line: Profile configuration complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Listener Startup:
WARNING: Skipping line: Oracle Net Listener Startup:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Running Listener Control:
WARNING: Skipping line: Running Listener Control:
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
WARNING: Skipping line: /u01/app/oracle/product/12.1.0/db_1/bin/lsnrctl start LISTENER
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener Control complete.
WARNING: Skipping line: Listener Control complete.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Listener start failed.
WARNING: Skipping line: Listener start failed.
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
WARNING: Skipping line: Check the trace file for details: /u01/app/oracle/cfgtoollogs/netca/trace_OraDB12Home1-1504033PM3901.log
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Read: Oracle Net Services configuration failed. The exit code is 1
WARNING: Skipping line: Oracle Net Services configuration failed. The exit code is 1
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
INFO: Completed Plugin named: Oracle Net Configuration Assistant
And the corresponding trace_OraDB12Home1-1504033PM3901.log:
[main] [ 2015-04-03 15:39:06.329 MSK ] [OracleHome.getVersion:1059] Current Version From Inventory: 12.1.0.2.0
[main] [ 2015-04-03 15:39:06.329 MSK ] [InitialSetup.<init>:4151] Admin location is: /u01/app/oracle/product/12.1.0/db_1/network/admin
[main] [ 2015-04-03 15:39:06.718 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.735 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.737 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.795 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.795 MSK ] [InitialSetup.checkHAConfiguration:4808] HA Server is NOT configured.
[main] [ 2015-04-03 15:39:06.797 MSK ] [NetCAResponseFile.<init>:75] Response file initialized: /u01/app/oracle/product/12.1.0/db_1/network/install/netca_typ.rsp
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getInstalledComponents:114] Installed components from response file: server, net8, javavm
[main] [ 2015-04-03 15:39:06.798 MSK ] [NetCAResponseFile.getVirtualHost:171] Virtual Host from response file: null
[main] [ 2015-04-03 15:39:06.799 MSK ] [SilentConfigure.performSilentConfigure:198] Typical profile configuration.
[main] [ 2015-04-03 15:39:06.801 MSK ] [ConfigureProfile.setProfileParam:140] Setting NAMES.DIRECTORY_PATH: (TNSNAMES, EZCONNECT)
[main] [ 2015-04-03 15:39:06.802 MSK ] [SilentConfigure.performSilentConfigure:206] Typical listener configuration.
[main] [ 2015-04-03 15:39:06.839 MSK ] [ConfigureListener.isHASConfigured:1596] Calling SRVM api to check if Oracle Restart is configured ...
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getCurrentOracleHome:593] Oracle home from system property: /u01/app/oracle/product/12.1.0/db_1
[main] [ 2015-04-03 15:39:06.840 MSK ] [HAUtils.getConfiguredGridHome:1343] ----- Getting CRS HOME ----
[main] [ 2015-04-03 15:39:06.840 MSK ] [UnixSystem.getCRSHome:2878] olrFileName = /etc/oracle/olr.loc
[main] [ 2015-04-03 15:39:06.841 MSK ] [HAUtils.getHASHome:1500] Failed to get HAS home.
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASConfigured:1607] Is Oracle Restart configured: false
[main] [ 2015-04-03 15:39:06.841 MSK ] [ConfigureListener.isHASRunning:1636] Is Oracle Restart running: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.listenerExists:396] Is listener "LISTENER" already exists: false
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.typicalConfigure:257] Checking for free port in range: 1521-1540
[main] [ 2015-04-03 15:39:06.842 MSK ] [ConfigureListener.validateEndPoint:1059] Validating end-point: TCP:1521
[main] [ 2015-04-03 15:39:06.944 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:06.945 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.946 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:06.968 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.969 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.validateEndPoint:1114] Validation...Complete for TCP/TCPS.
[main] [ 2015-04-03 15:39:06.970 MSK ] [ConfigureListener.typicalConfigure:274] Using port: 1521
[main] [ 2015-04-03 15:39:08.684 MSK ] [ConfigureListener.isPortFree:1131] Checking if port 1521 is free on local machine...
[main] [ 2015-04-03 15:39:08.685 MSK ] [ConfigureListener.isPortFree:1146] InetAddress.getByName(127.0.0.1): /127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1148] Local host IP address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.686 MSK ] [ConfigureListener.isPortFree:1150] Local host name: localhost.localdomain
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/127.0.0.1, Is IPv6 Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/127.0.0.1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.687 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/127.0.0.1
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.688 MSK ] [ConfigureListener.isPortFree:1166] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is IPv6 Address: true
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1169] IP Address: localhost.localdomain/0:0:0:0:0:0:0:1, Is Link-Local Address: false
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1194] Creating ServerSocket on Port:1521, IP Address: localhost.localdomain/0:0:0:0:0:0:0:1
[main] [ 2015-04-03 15:39:08.689 MSK ] [ConfigureListener.isPortFree:1197] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1209] Creating ServerSocket on Port:1521, Local IP Address: /127.0.0.1
[main] [ 2015-04-03 15:39:08.690 MSK ] [ConfigureListener.isPortFree:1213] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1219] Creating ServerSocket on Port:1521
[main] [ 2015-04-03 15:39:08.691 MSK ] [ConfigureListener.isPortFree:1222] Created ServerSocket successfully.
[main] [ 2015-04-03 15:39:08.692 MSK ] [ConfigureListener.isPortFree:1242] Returning is Port 1521 free: true
Maybe the problem is because of:
PRCI-1144 : Failed to retrieve Oracle Grid Infrastructure home path
PRKC-1144 : File "/etc/oracle/olr.loc" not found.
Any ideas what I am doing wrong and how to finally install Oracle?
I found the reason for this exception. If somebody faces the same problem, just create the /etc/oracle folder and give it 777 permissions. For me it helped.
I also got the error "[INS-20802] Oracle Net Configuration Assistant failed" while installing Oracle 12c (12.2.0.1.4) on CentOS 7.
In my case the error went away after adding an entry to the /etc/hosts file with the hostname and its local network IP.
After that change, the installation was able to finish successfully.
Resulting /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 centos100
777 is not the solution; it makes your system vulnerable. As suggested in the Oracle docs, the directory privileges should be 775.
For me, on Windows 10, the solution was to install the Microsoft Visual C++ 2010 Redistributable Package (x86).
