HiveServer2 is unable to start on HDP - Hadoop

In my project we are using HDP (Apache License 2.0). All services are running and healthy, but HiveServer2 is not starting. When I try to start it, it fails again and again.
hiveserver2.log
The same ERROR is generated again and again:
2022-03-08T09:07:27,690 ERROR [main]: metrics2.CodahaleMetrics (:()) - Unable to instantiate using constructor(MetricRegistry, HiveConf) for reporter org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter from conf HIVE_CODAHALE_METRICS_REPORTER_CLASSES
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_171]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_171]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_171]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_171]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initCodahaleMetricsReporterClasses(CodahaleMetrics.java:429) ~[hive-common-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:396) ~[hive-common-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:196) ~[hive-common-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_171]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_171]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_171]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_171]
at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42) ~[hive-common-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:206) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1072) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:135) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1341) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1185) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
at org.apache.hadoop.util.RunJar.run(RunJar.java:239) ~[hadoop-common-3.0.0.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:153) ~[hadoop-common-3.0.0.jar:?]
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source hiveserver2 already exists!
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152) ~[hadoop-common-3.0.0.jar:?]
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125) ~[hadoop-common-3.0.0.jar:?]
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229) ~[hadoop-common-3.0.0.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:206) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:62) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter$Builder.build(HadoopMetrics2Reporter.java:162) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter.<init>(Metrics2Reporter.java:45) ~[hive-common-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
... 23 more
TASK Logs
2022-03-09 14:13:34,273 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server xxxxcn001.pin0c06.hcloud.io:2181,xxxxmn001.pin0c06.hcloud.io:2181,xxxxdn001.pin0c06.hcloud.io:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2022-03-09 14:13:34,853 - call returned (1, '')
2022-03-09 14:13:34,854 - Will retry 29 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
(The same zkCli call and "Will retry" message repeat every 10 seconds, with the retry counter counting down from 28.)
2022-03-09 14:14:48,415 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server xxxxcn001.pin0c06.hcloud.io:2181,xxxxmn001.pin0c06.hcloud.io:2181,xxxxdn001.pin0c06.hcloud.io:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2022-03-09 14:14:49,014 - call returned (1, '')
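The Ambari task log above is only a symptom: Ambari is polling ZooKeeper for a serverUri entry under /hiveserver2 and gives up because HiveServer2 never finishes starting, so it never registers itself. The metrics error ("Metrics source hiveserver2 already exists!") usually means a hiveserver2 metrics source was already registered by an earlier initialization attempt in the same process, so the first real failure is often further up in the log. A hedged check and workaround (the property name is from HiveConf; verify it against your HDP/Ambari version before relying on it):
# inspect what, if anything, is registered under the znode Ambari is polling
# (this is the same zkCli invocation the task log shows)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh \
  -server xxxxcn001.pin0c06.hcloud.io:2181 ls /hiveserver2

# hedged workaround: disable HS2 metrics so the Codahale reporter cannot block
# startup; set it through Ambari (Hive config) rather than editing hive-site
# by hand on an HDP cluster:
#   hive.server2.metrics.enabled=false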

Related

ZAP API Scan failing with error Read timed out

I am able to do an API scan as well as generate a report when I run the below command from Windows:
docker run -v "$(pwd):/zap/wrk/:rw" -t owasp/zap2docker-weekly zap-api-scan.py -t http://10.170.170.170:1700 /account?field4=448808888888"&"field7=GENERIC01"&"field10=ABC076 -f openapi -r ZAP_Report.htm
Once I switch to running the same command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-weekly zap-api-scan.py -t http://10.170.170.170:1700/account?field4=448808888888"&"field7=GENERIC01"&"field10=DCF43 -f openapi -r ~/serverkeys/ZAP_REPORT.htm
from Debian, I get an error; not quite sure what I'm missing:
.....
[ZAP-ActiveScanner-1] WARN org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule - Command Injection vulnerability check failed for parameter [field10] and payload [';cat /etc/passwd;'] due to an I/O error
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:115) ~[?:?]
at java.net.SocketInputStream.read(SocketInputStream.java:168) ~[?:?]
at java.net.SocketInputStream.read(SocketInputStream.java:140) ~[?:?]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
at java.io.BufferedInputStream.read(BufferedInputStream.java:271) ~[?:?]
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1153) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:2138) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.zaproxy.zap.ZapGetMethod.readResponse(ZapGetMethod.java:112) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1162) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:470) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:207) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.executeMethod(HttpSender.java:430) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.runMethod(HttpSender.java:672) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.send(HttpSender.java:627) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAuthenticated(HttpSender.java:602) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAuthenticated(HttpSender.java:585) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAndReceive(HttpSender.java:490) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.sendAndReceive(AbstractPlugin.java:315) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.sendAndReceive(AbstractPlugin.java:246) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule.testCommandInjection(CommandInjectionScanRule.java:524) [ascanrules-release-42.zap:?]
at org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule.scan(CommandInjectionScanRule.java:431) [ascanrules-release-42.zap:?]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:201) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:126) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:87) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.run(AbstractPlugin.java:333) [zap-D-2021-10-25.jar:D-2021-10-25]
at java.lang.Thread.run(Thread.java:829) [?:?]
493852 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.4.117:8002 | CommandInjectionScanRule in 421.201s with 84 message(s) sent and 0 alert(s) raised.
493853 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | DirectoryBrowsingScanRule strength MEDIUM threshold MEDIUM
493988 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | DirectoryBrowsingScanRule in 0.136s with 2 message(s) sent and 0 alert(s) raised.
493988 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | BufferOverflowScanRule strength MEDIUM threshold MEDIUM
494126 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | BufferOverflowScanRule in 0.137s with 3 message(s) sent and 0 alert(s) raised.
494126 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | FormatStringScanRule strength MEDIUM threshold MEDIUM
494287 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | FormatStringScanRule in 0.161s with 9 message(s) sent and 0 alert(s) raised.
494287 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | CrlfInjectionScanRule strength MEDIUM threshold MEDIUM
494560 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | CrlfInjectionScanRule in 0.273s with 21 message(s) sent and 0 alert(s) raised.
........
........
Is there any additional tracing I can do on the scan to see why it's timing out?
It appears the scan is terminating before completing, and it's also pointing to /etc/passwd??
You are not necessarily missing anything.
ZAP typically makes loads of requests to the target. Some of those may time out - that's all this warning is telling you. If you keep getting these, it might be an indication that your site has become unresponsive.
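Two hedged things worth checking, since the Debian command differs from the Windows one: the report path ~/serverkeys/ZAP_REPORT.htm is outside the mounted /zap/wrk volume (the scan scripts write the report relative to /zap/wrk), and the read timeout can be raised by passing ZAP options through with -z. A sketch, assuming the connection.timeoutInSecs key from ZAP's Connection options (check your ZAP version if it doesn't take effect):
docker run -v "$(pwd):/zap/wrk/:rw" -t owasp/zap2docker-weekly zap-api-scan.py \
  -t "http://10.170.170.170:1700/account?field4=448808888888&field7=GENERIC01&field10=DCF43" \
  -f openapi -r ZAP_Report.htm \
  -z "-config connection.timeoutInSecs=120"
Also, the /etc/passwd in the warning is just the command-injection payload ZAP sent for parameter [field10], not something read from your host.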

Kerberos problem: GSSException: No valid credentials provided

My application sends data to Kafka; Kerberos is used for authentication. Everything works fine for around 20 days, then I get the following exception:
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Initiating connection to node mkav2.dc.ex.com:9092 (id: 101 rack: null)
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to SEND_HANDSHAKE_REQUEST
2020-01-07 22:22:08.481 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Creating SaslClient: client=lpa/appX.dc.ex.com@DC.EX.COM;service=kafka;serviceHostname=mkav2.dc.ex.com;mechs=[GSSAPI]
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.network.Selector : Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 101
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2020-01-07 22:22:08.482 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Completed connection to node 101. Fetching API versions.
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.security.authenticator.SaslClientAuthenticator : Set SASL client state to INITIAL
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.common.network.Selector : Connection with mkav2.dc.ex.com/172.10.15.44 disconnected
javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTH_FAILED state.
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:298)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslToken(SaslClientAuthenticator.java:215)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:183)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:76)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:376)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:280)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:278)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:278)
... 9 common frames omitted
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 14 common frames omitted
2020-01-07 22:22:08.484 DEBUG 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Node 101 disconnected.
2020-01-07 22:22:08.484 WARN 24987 --- [fka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient : Connection to node 101 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
After restarting the application, everything works fine for another 20 days or so, and then I get the same exception again. These are the ticket properties in the krb5.conf file:
ticket_lifetime = 86400
renew_lifetime = 604800
Any ideas on why this could be happening?
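With ticket_lifetime = 1 day and renew_lifetime = 7 days, a ticket taken from a credential cache can be renewed for at most 7 days; once the renew window is exhausted, the JVM's Krb5LoginModule can only recover with a fresh login, which it can do only if it has a keytab. A common pattern for long-running producers is to log in from a keytab in the JAAS file instead of relying on an externally refreshed ticket cache. A minimal sketch, with a hypothetical keytab path and the principal modeled on the one visible in the logs above:
# kafka_client_jaas.conf (paths are hypothetical; adjust to your setup)
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/lpa.keytab"
  principal="lpa/appX.dc.ex.com@DC.EX.COM";
};

# point the producer JVM at it:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

# when the failure happens, compare klist's "renew until" time with the
# ~20-day failure interval to confirm (or rule out) ticket expiry:
klist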

Apache Atlas quickstart - Kafka error

Env: no Kerberos, no Ranger, no HDFS. EC2 with SSL.
Getting this error after running $ATLAS_HOME/bin/quick_start.py https://$componentPrivateDNSRecord:21443 with the correct user/pass:
Creating sample types:
Created type [DB]
Created type [Table]
Created type [StorageDesc]
Created type [Column]
Created type [LoadProcess]
Created type [View]
Created type [JdbcAccess]
Created type [ETL]
Created type [Metric]
Created type [PII]
Created type [Fact]
Created type [Dimension]
Created type [Log Data]
Creating sample entities:
Exception in thread "main" com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:105)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:634)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:334)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:311)
at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:199)
at org.apache.atlas.AtlasClientV2.createEntity(AtlasClientV2.java:277)
at org.apache.atlas.examples.QuickStartV2.createInstance(QuickStartV2.java:339)
at org.apache.atlas.examples.QuickStartV2.createDatabase(QuickStartV2.java:362)
at org.apache.atlas.examples.QuickStartV2.createEntities(QuickStartV2.java:268)
at org.apache.atlas.examples.QuickStartV2.runQuickstart(QuickStartV2.java:150)
at org.apache.atlas.examples.QuickStartV2.main(QuickStartV2.java:132)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:347)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
... 14 more
No sample data added to Apache Atlas Server.
Relevant code:
https://github.com/apache/incubator-atlas/blob/master/webapp/src/main/java/org/apache/atlas/examples/QuickStartV2.java
// This works
quickStartV2.createTypes();
// This errors
quickStartV2.createEntities();
First I thought Atlas -> Kafka connectivity was the issue, but then I see:
[ec2-user@ip-10-160-187-181 logs]$ cat atlas_kafka_setup.log
2018-07-25 00:06:14,923 INFO - [main:] ~ Looking for atlas-application.properties in classpath (ApplicationProperties:78)
2018-07-25 00:06:14,926 INFO - [main:] ~ Loading atlas-application.properties from file:/home/ec2-user/atlas/distro/target/apache-atlas-1.0.0-SNAPSHOT-bin/apache-atlas-1.0.0-SNAPSHOT/conf/atlas-application.properties (ApplicationProperties:91)
2018-07-25 00:06:16,512 WARN - [main:] ~ Attempting to create topic ATLAS_HOOK (AtlasTopicCreator:72)
2018-07-25 00:06:17,004 WARN - [main:] ~ Created topic ATLAS_HOOK with partitions 1 and replicas 1 (AtlasTopicCreator:119)
2018-07-25 00:06:17,004 WARN - [main:] ~ Attempting to create topic ATLAS_ENTITIES (AtlasTopicCreator:72)
2018-07-25 00:06:17,024 WARN - [main:] ~ Created topic ATLAS_ENTITIES with partitions 1 and replicas 1 (AtlasTopicCreator:119)
2018-07-25 01:49:45,147 DEBUG - [main:] ~ Calling API [ GET : api/atlas/v2/types/typedefs ] (AtlasBaseClient:319)
2018-07-25 01:49:45,147 DEBUG - [main:] ~ Attempting to configure HTTPS connection using client configuration (SecureClientUtils$4:221)
2018-07-25 01:49:45,166 INFO - [main:] ~ Unable to configure HTTPS connection from configuration. Leveraging JDK properties. (SecureClientUtils$4:240)
2018-07-25 01:49:45,269 DEBUG - [main:] ~ API https://mydns:21443/api/atlas/v2/types/typedefs?name=Dimension returned status 200 (AtlasBaseClient:337)
2018-07-25 01:49:45,270 DEBUG - [main:] ~ Calling API [ GET : api/atlas/v2/types/typedefs ] (AtlasBaseClient:319)
2018-07-25 01:49:45,271 DEBUG - [main:] ~ Attempting to configure HTTPS connection using client configuration (SecureClientUtils$4:221)
2018-07-25 01:49:45,291 INFO - [main:] ~ Unable to configure HTTPS connection from configuration. Leveraging JDK properties. (SecureClientUtils$4:240)
2018-07-25 01:49:45,450 DEBUG - [main:] ~ API https://mydns:21443/api/atlas/v2/types/typedefs?name=Log+Data returned status 200 (AtlasBaseClient:337)
2018-07-25 01:49:45,455 DEBUG - [main:] ~ Calling API [ POST : api/atlas/v2/entity ] <== AtlasEntityWithExtInfo{entity=AtlasEntity{AtlasStruct{typeName='DB', attributes=[owner:John ETL, createTime:1532483385453, name:Sales, description:sales database, locationuri:hdfs://host:8000/apps/warehouse/sales]}guid='-6466195619848', status=null, createdBy='null', updatedBy='null', createTime=null, updateTime=null, version=0, relationshipAttributes=[], classifications=[], },AtlasEntityExtInfo{referredEntities={}}} (AtlasBaseClient:319)
2018-07-25 01:49:45,455 DEBUG - [main:] ~ Attempting to configure HTTPS connection using client configuration (SecureClientUtils$4:221)
2018-07-25 01:49:45,474 INFO - [main:] ~ Unable to configure HTTPS connection from configuration. Leveraging JDK properties. (SecureClientUtils$4:240)
2018-07-25 01:49:33,256 Audit: myuser/10.160.189.35-10.160.189.35 performed request POST https://mydns:21443/api/atlas/v2/types/typedefs (10.160.187.181) at time 2018-07-25T01:49Z
2018-07-25 01:49:45,445 Audit: myuser/10.160.189.35-10.160.189.35 performed request GET https://mydns:21443/api/atlas/v2/types/typedefs?name=Log+Data (10.160.187.181) at time 2018-07-25T01:49Z
2018-07-25 01:49:45,678 Audit: myuser/10.160.189.35-10.160.189.35 performed request POST https://mydns:21443/api/atlas/v2/entity (10.160.187.181) at time 2018-07-25T01:49Z
The 2 topics are returned by this:
$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper localhost:2181
Atlas's application.log does have this; not sure why:
2018-07-25 02:18:14,991 DEBUG - [NotificationHookConsumer thread-0:] ~ Give up sending metadata request since no node is available (NetworkClient$DefaultMetadataUpdater:625)
2018-07-25 02:18:15,018 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Initialize connection to node -1 for sending metadata request (NetworkClient$DefaultMetadataUpdater:644)
2018-07-25 02:18:15,018 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Initiating connection to node -1 at localhost:9027. (NetworkClient:496)
2018-07-25 02:18:15,018 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Connection with localhost/127.0.0.1 disconnected (Selector:345)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:309)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:229)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134)
at java.lang.Thread.run(Thread.java:748)
2018-07-25 02:18:15,018 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Node -1 disconnected. (NetworkClient:463)
2018-07-25 02:18:15,018 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Give up sending metadata request since no node is available (NetworkClient$DefaultMetadataUpdater:625)
2018-07-25 02:18:15,092 DEBUG - [NotificationHookConsumer thread-0:] ~ Initialize connection to node -1 for sending metadata request (NetworkClient$DefaultMetadataUpdater:644)
2018-07-25 02:18:15,092 DEBUG - [NotificationHookConsumer thread-0:] ~ Initiating connection to node -1 at localhost:9027. (NetworkClient:496)
2018-07-25 02:18:15,092 DEBUG - [NotificationHookConsumer thread-0:] ~ Connection with localhost/127.0.0.1 disconnected (Selector:345)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:309)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:973)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.atlas.kafka.AtlasKafkaConsumer.receive(AtlasKafkaConsumer.java:63)
at org.apache.atlas.kafka.AtlasKafkaConsumer.receive(AtlasKafkaConsumer.java:55)
at org.apache.atlas.notification.NotificationHookConsumer$HookConsumer.doWork(NotificationHookConsumer.java:305)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-07-25 02:18:15,092 DEBUG - [NotificationHookConsumer thread-0:] ~ Node -1 disconnected. (NetworkClient:463)
2018-07-25 02:18:15,092 DEBUG - [NotificationHookConsumer thread-0:] ~ Give up sending metadata request since no node is available (NetworkClient$DefaultMetadataUpdater:625)
2018-07-25 02:18:15,119 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Initialize connection to node -1 for sending metadata request (NetworkClient$DefaultMetadataUpdater:644)
2018-07-25 02:18:15,119 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Initiating connection to node -1 at localhost:9027. (NetworkClient:496)
2018-07-25 02:18:15,119 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Connection with localhost/127.0.0.1 disconnected (Selector:345)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:309)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:229)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134)
at java.lang.Thread.run(Thread.java:748)
2018-07-25 02:18:15,119 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Node -1 disconnected. (NetworkClient:463)
2018-07-25 02:18:15,119 DEBUG - [kafka-producer-network-thread | producer-1:] ~ Give up sending metadata request since no node is available (NetworkClient$DefaultMetadataUpdater:625)
This fixed it!
sed -i 's/atlas.kafka.bootstrap.servers=localhost:9027/atlas.kafka.bootstrap.servers=localhost:9092/' $ATLAS_HOME/conf/atlas-application.properties
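For anyone hitting the same thing: the application.log shows the producer and the hook consumer dialing localhost:9027, while the quick-start Kafka listens on the default 9092, which is exactly what the bootstrap-servers edit above corrects. A quick hedged way to confirm which port the broker actually bound before (or after) the edit:
# which of the two ports is actually listening?
ss -ltnp | grep -E ':(9092|9027)'
# and what Atlas is currently configured to use:
grep '^atlas.kafka.bootstrap.servers' $ATLAS_HOME/conf/atlas-application.properties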

Apache Camel + HTTPS REST API (POST)

I'm a newbie to Apache Camel, and lately I've been trying to make a POST request to an HTTPS REST API.
I have gone through many posts and documentation, but I still couldn't get the gist of this.
Please find my code below:
from("timer:aTimer?period=20s")
    .process(ex -> ex.getIn().setBody(
        "{\n" +
        "  \"userId\": 777,\n" +
        "  \"title\": \"sample\",\n" +
        "  \"body\": \"my body\"\n" +
        "}"
    ))
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("restlet:https://jsonplaceholder.typicode.com/posts")
    .log("${body}");
Whenever I run my application, I'm getting the below error:
Started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is starting
INFO ManagedManagementStrategy - JMX is enabled
INFO DefaultTypeConverter - Type converters loaded (core: 192, classpath: 14)
INFO DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
Mar 05, 2018 3:20:45 PM org.restlet.ext.httpclient.HttpClientHelper start
INFO: Starting the Apache HTTP client
INFO DefaultCamelContext - Route: route1 started and consuming from: timer://aTimer?period=20s
INFO DefaultCamelContext - Total 1 routes, of which 1 are started
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) started in 0.879 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutting down
INFO DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 300 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=3018]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 299 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=4020]
INFO DefaultShutdownStrategy - Waiting as there are still 1 inflight and pending exchanges to complete, timeout in 298 seconds. Inflights per route: [route1 = 1]
INFO DefaultShutdownStrategy - There are 1 inflight exchanges:
InflightExchange: [exchangeId=ID-ubuntu-Latitude-6430U-1520243444162-0-1, fromRouteId=route1, routeId=route1, nodeId=to1, elapsed=0, duration=5023]
Mar 05, 2018 3:20:51 PM org.restlet.ext.httpclient.internal.HttpMethodCall sendRequest
WARNING: An error occurred during the communication with the remote HTTP server.
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
at sun.security.ssl.InputRecord.read(InputRecord.java:527)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:573)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:557)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:414)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:339)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:363)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
WARN TimerConsumer - Error processing exchange. Exchange[ID-ubuntu-Latitude-6430U-1520243444162-0-1]. Caused by: [org.apache.camel.component.restlet.RestletOperationException - Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server]
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
ERROR DefaultErrorHandler - Failed delivery for (MessageId: ID-ubuntu-Latitude-6430U-1520243444162-0-2 on ExchangeId: ID-ubuntu-Latitude-6430U-1520243444162-0-1). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [timer://aTimer?period=20s ] [ 5321]
[route1 ] [process1 ] [Processor#0x33ae3bf8 ] [ 4]
[route1 ] [setHeader1 ] [setHeader[CamelHttpMethod] ] [ 0]
[route1 ] [setHeader2 ] [setHeader[Content-Type] ] [ 0]
[route1 ] [to1 ] [restlet:https://jsonplaceholder.typicode.com/443:posts ] [ 5308]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.component.restlet.RestletOperationException: Restlet operation failed invoking https://jsonplaceholder.typicode.com:80/443:posts with statusCode: 1001 /n responseBody:HTTPS/1.1 - Communication Error (1001) - The connector failed to complete the communication with the server
at org.apache.camel.component.restlet.RestletProducer.populateRestletProducerException(RestletProducer.java:304)
at org.apache.camel.component.restlet.RestletProducer$1.handle(RestletProducer.java:190)
at org.restlet.engine.adapter.ClientAdapter$1.handle(ClientAdapter.java:90)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:371)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:81)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:119)
at org.restlet.Client.handle(Client.java:153)
at org.restlet.Restlet.handle(Restlet.java:342)
at org.restlet.Restlet.handle(Restlet.java:355)
at org.apache.camel.component.restlet.RestletProducer.process(RestletProducer.java:179)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Mar 05, 2018 3:20:52 PM org.restlet.ext.httpclient.HttpClientHelper stop
INFO: Stopping the HTTP client
INFO DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: timer://aTimer?period=20s
INFO DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 3 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) uptime 7.927 seconds
INFO DefaultCamelContext - Apache Camel 2.20.1 (CamelContext: camel-1) is shutdown in 3.048 seconds
Please help me. I've also tried the Apache HTTP4 component, but still no luck.
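Note the endpoint the error reports: https://jsonplaceholder.typicode.com:80/443:posts. The restlet producer has parsed the URI as host jsonplaceholder.typicode.com on port 80 with path /443:posts, so the request goes out as plaintext HTTP on port 80 and the TLS layer fails with "Unrecognized SSL message, plaintext connection?". A hedged sanity check from the same machine, outside Camel, confirms the API itself accepts the POST (jsonplaceholder normally answers 201 Created):
curl -sS -X POST https://jsonplaceholder.typicode.com/posts \
  -H 'Content-Type: application/json' \
  -d '{"userId":777,"title":"sample","body":"my body"}'
If that works, the usual next things to try are spelling the port out in the endpoint URI (for example restlet:https://jsonplaceholder.typicode.com:443/posts?restletMethod=POST) or switching to the https4:// scheme of the camel-http4 component; both are suggestions, not guaranteed fixes.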

How to insert JSON data from HDFS to MySQL using Sqoop?

I have loaded JSON data into HDFS, and I created a table with the required columns in a MySQL database as follows.
How do I create the table with a row format that accepts JSON?
My HDFS data:
{
  "Employees" : [
    {
      "userId":"rirani",
      "jobTitleName":"Developer",
      "firstName":"Romin",
      "lastName":"Irani",
      "preferredFullName":"Romin Irani",
      "employeeCode":"E1",
      "region":"CA",
      "phoneNumber":"408-1234567",
      "emailAddress":"romin.k.irani@gmail.com"
    },
    {
      "userId":"nirani",
      "jobTitleName":"Developer",
      "firstName":"Neil",
      "lastName":"Irani",
      "preferredFullName":"Neil Irani",
      "employeeCode":"E2",
      "region":"CA",
      "phoneNumber":"408-1111111",
      "emailAddress":"neilrirani@gmail.com"
    },
    {
      "userId":"thanks",
      "jobTitleName":"Program Directory",
      "firstName":"Tom",
      "lastName":"Hanks",
      "preferredFullName":"Tom Hanks",
      "employeeCode":"E3",
      "region":"CA",
      "phoneNumber":"408-2222222",
      "emailAddress":"tomhanks@gmail.com"
    }
  ]
}
My SQL table structure
mysql> create table employee(userid int,jobTitleName varchar(20),firstName varchar(20),lastName varchar(20),preferrredFullName varchar(20),employeeCode varchar(20),region varchar(20),phoneNumber varchar(20), emailAddress varchar(20),modifiedDate timestamp DEFAULT CURRENT_TIMESTAMP);
mysql> desc employee;
+--------------------+-------------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------------+-------------+------+-----+-------------------+-------+
| userid | int(11) | YES | | NULL | |
| jobTitleName | varchar(20) | YES | | NULL | |
| firstName | varchar(20) | YES | | NULL | |
| lastName | varchar(20) | YES | | NULL | |
| preferrredFullName | varchar(20) | YES | | NULL | |
| employeeCode | varchar(20) | YES | | NULL | |
| region | varchar(20) | YES | | NULL | |
| phoneNumber | varchar(20) | YES | | NULL | |
| emailAddress | varchar(20) | YES | | NULL | |
| modifiedDate | timestamp | NO | | CURRENT_TIMESTAMP | |
+--------------------+-------------+------+-----+-------------------+-------+
10 rows in set (0.00 sec)
I am trying to load data from HDFS to MySQL for the above table using sqoop export as follows:
sqoop export --connect jdbc:mysql://localhost/emp_scheme --username root --password adithyan --table employee --export-dir /user/adithyan/filesystem/employee.txt
It ends up with an exception as follows:
17/02/18 19:35:35 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/02/18 19:35:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/02/18 19:35:35 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/02/18 19:35:35 INFO tool.CodeGenTool: Beginning code generation
17/02/18 19:35:36 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
17/02/18 19:35:36 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
17/02/18 19:35:36 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/adithyan/hadoop_dir/hadoop-1.2.1
Note: /tmp/sqoop-adithyan/compile/35afadf151a1dd1626a3658577cbc2dd/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/02/18 19:35:41 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-adithyan/compile/35afadf151a1dd1626a3658577cbc2dd/employee.jar
17/02/18 19:35:41 INFO mapreduce.ExportJobBase: Beginning export of employee
17/02/18 19:35:45 INFO input.FileInputFormat: Total input paths to process : 1
17/02/18 19:35:45 INFO input.FileInputFormat: Total input paths to process : 1
17/02/18 19:35:45 INFO util.NativeCodeLoader: Loaded the native-hadoop library
17/02/18 19:35:45 WARN snappy.LoadSnappy: Snappy native library not loaded
17/02/18 19:35:46 INFO mapred.JobClient: Running job: job_201702181051_0002
17/02/18 19:35:47 INFO mapred.JobClient: map 0% reduce 0%
17/02/18 19:36:17 INFO mapred.JobClient: Task Id : attempt_201702181051_0002_m_000000_0, Status : FAILED
java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: Can't parse input data: '"firstName":"Tom"'
at employee.__loadFromFields(employee.java:596)
at employee.parse(employee.java:499)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
Caused by: java.lang.NumberFormatException: For input string: ""firstName":"Tom""
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:569)
at java.lang.Integer.valueOf(Integer.java:766)
at employee.__loadFromFields(employee.java:548)
... 12 more
17/02/18 19:36:18 INFO mapred.JobClient: Task Id : attempt_201702181051_0002_m_000001_0, Status : FAILED
java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: Can't parse input data: '{'
at employee.__loadFromFields(employee.java:596)
at employee.parse(employee.java:499)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
Caused by: java.lang.NumberFormatException: For input string: "{"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.valueOf(Integer.java:766)
at employee.__loadFromFields(employee.java:548)
... 12 more
(The same two map-task failures, one on '"firstName":"Tom"' and one on '{', repeat for attempts _1 and _2 of both map tasks before the job fails.)
Can somebody help me on this?
You may have to look at multiple options.
MySQL's JSON_SET/JSON_REPLACE/JSON_INSERT functions may not be directly supported by Sqoop yet.
Another option is to pre-process the data (for example with Pig) to flatten the JSON, stage the result in HDFS, and only then sqoop it into the RDBMS; a sketch of that pre-processing step follows.
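Sqoop export reads the export dir as delimited text, one row per line, which is why the mapper throws NumberFormatException on fragments like '{' and '"firstName":"Tom"': raw JSON is not a format it can parse. Instead of Pig, here is a lighter-weight sketch of the same pre-processing idea using jq (assuming jq is available on the client machine; paths reuse the ones from the question). Note also that the sample userId values are strings like "rirani" while the MySQL userid column is int(11), so that column type likely needs to change to varchar for the export to succeed:
# flatten the JSON into CSV, one employee per line
hadoop fs -cat /user/adithyan/filesystem/employee.txt \
  | jq -r '.Employees[] | [.userId, .jobTitleName, .firstName, .lastName,
      .preferredFullName, .employeeCode, .region, .phoneNumber, .emailAddress] | @csv' \
  > employee.csv
hadoop fs -put employee.csv /user/adithyan/filesystem/employee.csv

# export the flattened file instead of the raw JSON
sqoop export --connect jdbc:mysql://localhost/emp_scheme --username root -P \
  --table employee --export-dir /user/adithyan/filesystem/employee.csv \
  --input-fields-terminated-by ',' --input-optionally-enclosed-by '"'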
