Hiveserver not found in hadoop

When I start HiveServer2 from the command line on Ubuntu, I get:
/local/hive/lib/hive-jdbc-2.1.0-standalone.jar!/hive-webapps/hiveserver2/ to /tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp
19/04/25 20:03:42 [Thread-12]: WARN thrift.ThriftCLIService: XSRF filter disabled
19/04/25 20:03:42 [Thread-12]: INFO server.Server: jetty-7.6.0.v20120127
19/04/25 20:03:42 [Thread-12]: INFO handler.ContextHandler: started o.e.j.s.ServletContextHandler{/,null}
19/04/25 20:03:42 [Thread-12]: INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:10001
19/04/25 20:03:42 [Thread-12]: INFO thrift.ThriftCLIService: Started ThriftHttpCLIService in http mode on port 10001 path=/cliservice/* with 5...500 worker threads
19/04/25 20:03:42 [main]: INFO handler.ContextHandler: started o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp/},jar:file:/usr/local/hive/lib/hive-jdbc-2.1.0-standalone.jar!/hive-webapps/hiveserver2
19/04/25 20:03:42 [main]: INFO handler.ContextHandler: started o.e.j.s.ServletContextHandler{/static,jar:file:/usr/local/hive/lib/hive-jdbc-2.1.0-standalone.jar!/hive-webapps/static}
19/04/25 20:03:42 [main]: INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:10002
19/04/25 20:03:42 [main]: INFO http.HttpServer: Started HttpServer[hiveserver2] on port 10002
The Ubuntu shell then stays blocked, and Beeline does not connect:
beeline> !connect jdbc:hive2://localhost:10002/default
19/04/25 20:07:08 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10002
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10002/default: Invalid status 72 (state=08S01,code=0)

javier@javier-VirtualBox:/usr/local/hive/bin$ sudo netstat -nautlp | grep 10002
tcp 0 0 0.0.0.0:10002 0.0.0.0:* LISTEN 19704/java
Where is the HiveServer2 log file?
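(Note: the startup log above shows the Thrift service running in http mode on port 10001 with path=/cliservice, while port 10002 serves the web UI, which would explain the "Invalid status" error when Beeline talks to 10002. A connection string matching the log, as a sketch only with host and database assumed, would be:
beeline> !connect jdbc:hive2://localhost:10001/default;transportMode=http;httpPath=cliservice
As for the log file: in Hive 2.x its location is configured in conf/hive-log4j2.properties; by default it is ${java.io.tmpdir}/${user.name}/hive.log, typically /tmp/<user>/hive.log.)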

Related

hiveserver2 is shutting down frequently in hadoop cluster

I have been facing this issue for quite some time now and have not been able to track down why it is happening.
Whenever we start hiveserver2 using the command
./hiveserver2 &
it starts and stays up for some time, but then shuts down. While hiveserver2 is up and running, the hive logs show the following error:
2018-03-12 04:44:57,029 ERROR [HiveServer2-Handler-Pool: Thread-33]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
2018-03-12 04:45:55,361 INFO [main]: SessionState (SessionState.java:printInfo(951)) -
Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j.properties
But I'm not really sure that the shutdown is caused by the error above, since the server keeps running for hours before shutting down.
The following hive logs appear when hiveserver2 shuts down:
2018-03-12 04:46:25,285 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - Stage-Stage-1: Map: 4 Reduce: 1 Cumulative CPU: 18.09 sec HDFS Read: 763046 HDFS Write: 2217 SUCCESS
2018-03-12 04:46:25,286 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - Total MapReduce CPU Time Spent: 18 seconds 90 msec
2018-03-12 04:46:25,286 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - OK
2018-03-12 04:46:25,286 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2018-03-12 04:46:25,295 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1520829985286 end=1520829985295 duration=9 from=org.apache.hadoop.hive.ql.Driver>
2018-03-12 04:46:25,295 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.run start=1520829961477 end=1520829985295 duration=23818 from=org.apache.hadoop.hive.ql.Driver>
2018-03-12 04:46:25,304 INFO [main]: CliDriver (SessionState.java:printInfo(951)) - Time taken: 23.818 seconds
2018-03-12 04:46:25,304 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2018-03-12 04:46:25,305 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1520829985304 end=1520829985305 duration=1 from=org.apache.hadoop.hive.ql.Driver>
2018-03-12 04:46:36,351 INFO [Thread-9]: server.HiveServer2 (HiveServer2.java:stop(305)) - Shutting down HiveServer2
2018-03-12 04:46:36,351 INFO [Thread-9]: thrift.ThriftCLIService (ThriftCLIService.java:stop(201)) - Thrift server has stopped
2018-03-12 04:46:36,351 INFO [Thread-9]: service.AbstractService (AbstractService.java:stop(125)) - Service:ThriftBinaryCLIService is stopped.
2018-03-12 04:46:36,351 INFO [Thread-9]: service.AbstractService (AbstractService.java:stop(125)) - Service:OperationManager is stopped.
2018-03-12 04:46:36,351 INFO [Thread-9]: service.AbstractService (AbstractService.java:stop(125)) - Service:SessionManager is stopped.
2018-03-12 04:46:36,351 INFO [Thread-3]: server.HiveServer2 (HiveStringUtils.java:run(709)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down HiveServer2 at SERVER-HOSTNAME/192.168.***.**
************************************************************/
2018-03-12 04:46:46,352 WARN [Thread-9]: service.CompositeService (SessionManager.java:cleanupLoggingRootDir(213)) - Failed to cleanup root dir of HS2 logging: /usr/local/hive/log
java.io.FileNotFoundException: File does not exist: /usr/local/hive/log
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2275)
at org.apache.hive.service.cli.session.SessionManager.cleanupLoggingRootDir(SessionManager.java:211)
at org.apache.hive.service.cli.session.SessionManager.stop(SessionManager.java:205)
at org.apache.hive.service.CompositeService.stop(CompositeService.java:102)
at org.apache.hive.service.CompositeService.stop(CompositeService.java:92)
at org.apache.hive.service.cli.CLIService.stop(CLIService.java:165)
at org.apache.hive.service.CompositeService.stop(CompositeService.java:102)
at org.apache.hive.service.CompositeService.stop(CompositeService.java:92)
at org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:307)
at org.apache.hive.service.server.HiveServer2$1.run(HiveServer2.java:107)
2018-03-12 04:46:46,353 INFO [Thread-9]: service.AbstractService (AbstractService.java:stop(125)) - Service:CLIService is stopped.
2018-03-12 04:46:46,353 INFO [Thread-9]: service.AbstractService (AbstractService.java:stop(125)) - Service:HiveServer2 is stopped.
2018-03-12 04:51:07,336 INFO [main]: SessionState (SessionState.java:printInfo(951)) -
Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j.properties
If the issue is actually because of...
ERROR [HiveServer2-Handler-Pool: Thread-33]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
...then here are the settings from my hive-site.xml related to it, as mentioned in many other related posts:
<property>
  <name>hive.server2.authentication</name>
  <value>PAM</value>
</property>
<property>
  <name>hive.server2.authentication.pam.services</name>
  <value>sshd,sudo</value>
</property>
<property>
  <name>hive.server2.thrift.sasl.qop</name>
  <value>auth</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>false</value>
</property>
EDIT:
I tried starting hiveserver2 after changing hive.server2.authentication from PAM to NONE, but again hiveserver2 started with the following error:
ERROR [HiveServer2-Handler-Pool: Thread-31]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
Also, when trying to connect with beeline, it throws a connection exception, as expected:
bin$ ./beeline
Beeline version 1.2.2 by Apache Hive
beeline> !connect jdbc:hive2://192.168.XXX.XX:XXX7 myuser myp@sw0rd
Connecting to jdbc:hive2://192.168.XXX.XX:XXX7
Error: Could not open client transport with JDBC Uri: jdbc:hive2://192.168.203.XXX.XX:XXX7: java.net.ConnectException: Connection timed out (Connection timed out) (state=08S01,code=0)
0: jdbc:hive2://192.168.XXX.XX:XXX7 (closed)>
0: jdbc:hive2://192.168.XXX.XX:XXX7 (closed)>
while ps -ef | grep hive shows that hiveserver2 is up:
ps -ef | grep hive
hduser 30902 30165 1 05:39 pts/1 00:00:15 /data/apps/jdk/bin/java -Xmx4000m -Djava.library.path=/usr/local/hadoop/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/usr/local/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/local/hadoop -Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,console -Djava.library.path=/usr/local/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/local/hive/lib/hive-service-1.2.2.jar org.apache.hive.service.server.HiveServer2
The HiveServer2 documentation mentions that in PAM authentication mode, an expired user password will cause the server to go down. Please check whether that's the case. You can also try setting hive.server2.authentication to NONE and see whether that lets you connect to the server.
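In hive-site.xml that change would look like this (a minimal sketch showing only the authentication property):
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
</property>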
A timeout on a connection may simply mean that nothing is listening on the port at all, or that the connection is not authorized. Check:
netstat -na to verify the port is listening
/etc/security/access.conf
or iptables -L
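For example (a sketch; 10000 stands in for whatever port your HiveServer2 listens on):
# Is anything listening on the HiveServer2 port?
netstat -na | grep 10000
# Are there host-based login restrictions configured?
cat /etc/security/access.conf
# Is a firewall rule dropping the connection?
sudo iptables -L -n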

failed: SemanticException The current builtin authorization in Hive is incomplete and disabled

I started the Sentry service (without Kerberos, AD, or LDAP) and configured Hive and Impala with Sentry.
Then I used Beeline to connect to HiveServer2 (beeline> !connect jdbc:hive2://)
and ran the command "create role test_role", but it threw an error.
What could cause this to happen?
The following is the log:
[root@cdh1 ~]# su - hive -s /bin/bash
[hive@cdh1 ~]$ beeline
Beeline version 0.13.1-cdh5.3.0 by Apache Hive
beeline> !connect jdbc:hive2://
scan complete in 3ms
Connecting to jdbc:hive2://
Enter username for jdbc:hive2://:
Enter password for jdbc:hive2://:
16/02/19 13:46:20 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
16/02/19 13:46:20 INFO hive.metastore: Trying to connect to metastore with URI thrift://cdh1:9083
16/02/19 13:46:20 INFO hive.metastore: Connected to metastore.
16/02/19 13:46:21 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/02/19 13:46:21 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
16/02/19 13:46:21 INFO service.CompositeService: HiveServer2: Background operation thread pool size: 100
16/02/19 13:46:21 INFO service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
16/02/19 13:46:21 INFO service.CompositeService: HiveServer2: Background operation thread keepalive time: 10
16/02/19 13:46:21 INFO service.AbstractService: Service:OperationManager is inited.
16/02/19 13:46:21 INFO service.AbstractService: Service:LogManager is inited.
16/02/19 13:46:21 INFO service.AbstractService: Service:SessionManager is inited.
16/02/19 13:46:21 INFO service.AbstractService: Service:CLIService is inited.
16/02/19 13:46:21 INFO service.AbstractService: Service:OperationManager is started.
16/02/19 13:46:21 INFO service.AbstractService: Service:LogManager is started.
16/02/19 13:46:21 INFO service.AbstractService: Service:SessionManager is started.
16/02/19 13:46:21 INFO service.AbstractService: Service:CLIService is started.
16/02/19 13:46:21 INFO hive.metastore: Trying to connect to metastore with URI thrift://cdh1:9083
16/02/19 13:46:21 INFO hive.metastore: Connected to metastore.
16/02/19 13:46:21 INFO thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
16/02/19 13:46:21 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/02/19 13:46:21 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
Connected to: Apache Hive (version 0.13.1-cdh5.3.0)
Driver: Hive JDBC (version 0.13.1-cdh5.3.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://>
0: jdbc:hive2://> create role test_role;
16/02/19 13:46:32 INFO log.LogManager: Operation log size: 131072
16/02/19 13:46:32 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 INFO parse.ParseDriver: Parsing command: create role test_role
16/02/19 13:46:32 INFO parse.ParseDriver: Parse Completed
16/02/19 13:46:32 INFO log.PerfLogger: </PERFLOG method=parse start=1455860792301 end=1455860792688 duration=387 from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
FAILED: SemanticException The current builtin authorization in Hive is incomplete and disabled.
16/02/19 13:46:32 ERROR ql.Driver: FAILED: SemanticException The current builtin authorization in Hive is incomplete and disabled.
org.apache.hadoop.hive.ql.parse.SemanticException: The current builtin authorization in Hive is incomplete and disabled.
at org.apache.hadoop.hive.ql.parse.authorization.RestrictedHiveAuthorizationTaskFactoryImpl.raiseAuthError(RestrictedHiveAuthorizationTaskFactoryImpl.java:140)
at org.apache.hadoop.hive.ql.parse.authorization.RestrictedHiveAuthorizationTaskFactoryImpl.createCreateRoleTask(RestrictedHiveAuthorizationTaskFactoryImpl.java:47)
at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeCreateRole(DDLSemanticAnalyzer.java:559)
at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:455)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:206)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1019)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:100)
at org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:173)
at org.apache.hive.service.cli.session.HiveSessionImpl.runOperationWithLogCapture(HiveSessionImpl.java:715)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:370)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:357)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:237)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:392)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:232)
at org.apache.hive.beeline.Commands.execute(Commands.java:736)
at org.apache.hive.beeline.Commands.sql(Commands.java:657)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:910)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:772)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:734)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:469)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:452)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
16/02/19 13:46:32 INFO log.PerfLogger: </PERFLOG method=compile start=1455860792263 end=1455860792747 duration=484 from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1455860792747 end=1455860792747 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/02/19 13:46:32 WARN thrift.ThriftCLIService: Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException The current builtin authorization in Hive is incomplete and disabled.
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:102)
at org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:173)
at org.apache.hive.service.cli.session.HiveSessionImpl.runOperationWithLogCapture(HiveSessionImpl.java:715)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:370)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:357)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:237)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:392)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:232)
at org.apache.hive.beeline.Commands.execute(Commands.java:736)
at org.apache.hive.beeline.Commands.sql(Commands.java:657)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:910)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:772)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:734)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:469)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:452)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Error: Error while compiling statement: FAILED: SemanticException The current builtin authorization in Hive is incomplete and disabled. (state=42000,code=40000)
0: jdbc:hive2://>
"Enter username for jdbc:hive2://:" prompt is empty.
You need to provide the username of the sentry admin, one of the sentry.metastore.service.users values.
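For example (illustrative only; where this property lives and which users it lists depend on your Sentry setup):
<property>
  <name>sentry.metastore.service.users</name>
  <value>hive,impala,hue</value>
</property>
Then enter one of those users (e.g. hive) at the "Enter username for jdbc:hive2://:" prompt.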

Running Spark on the slave node (YARN) doesn't work

I can run the SparkPi example on the master node, but when I try the same command
spark-submit --class SparkPi --master yarn-client sparkpi.jar 10
on a slave node, I get an error:
2015-05-19 14:05:44,881 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing view acls to: maintainer
2015-05-19 14:05:44,886 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing modify acls to: maintainer
2015-05-19 14:05:44,887 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(maintainer); users with modify permissions: Set(maintainer)
2015-05-19 14:05:45,389 INFO [sparkDriver-akka.actor.default-dispatcher-4] slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2015-05-19 14:05:45,443 INFO [sparkDriver-akka.actor.default-dispatcher-4] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2015-05-19 14:05:45,641 INFO [sparkDriver-akka.actor.default-dispatcher-3] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://sparkDriver@slave2.com:33055]
2015-05-19 14:05:45,644 INFO [sparkDriver-akka.actor.default-dispatcher-3] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting now listens on addresses: [akka.tcp://sparkDriver@slave2.com:33055]
2015-05-19 14:05:45,653 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 33055.
2015-05-19 14:05:45,674 INFO [main] spark.SparkEnv (Logging.scala:logInfo(59)) - Registering MapOutputTracker
2015-05-19 14:05:45,688 INFO [main] spark.SparkEnv (Logging.scala:logInfo(59)) - Registering BlockManagerMaster
2015-05-19 14:05:45,707 INFO [main] storage.DiskBlockManager (Logging.scala:logInfo(59)) - Created local directory at /tmp/spark-local-20150519140545-c81b
2015-05-19 14:05:45,712 INFO [main] storage.MemoryStore (Logging.scala:logInfo(59)) - MemoryStore started with capacity 265.4 MB
2015-05-19 14:05:46,205 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-19 14:05:46,408 INFO [main] spark.HttpFileServer (Logging.scala:logInfo(59)) - HTTP File server directory is /tmp/spark-e95a2b5b-efea-41eb-93b9-0a9f7d6f6701
2015-05-19 14:05:46,413 INFO [main] spark.HttpServer (Logging.scala:logInfo(59)) - Starting HTTP Server
2015-05-19 14:05:46,477 INFO [main] server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
2015-05-19 14:05:46,499 INFO [main] server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started SocketConnector@0.0.0.0:52737
2015-05-19 14:05:46,500 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'HTTP file server' on port 52737.
2015-05-19 14:05:46,790 INFO [main] server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
2015-05-19 14:05:46,805 INFO [main] server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started SelectChannelConnector@0.0.0.0:4040
2015-05-19 14:05:46,805 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'SparkUI' on port 4040.
2015-05-19 14:05:46,808 INFO [main] ui.SparkUI (Logging.scala:logInfo(59)) - Started SparkUI at http://slave2.com:4040
2015-05-19 14:05:47,058 INFO [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added JAR file:/home/maintainer/myjars/sparkpi.jar at http://[ip]:52737/jars/sparkpi.jar with timestamp 1432033547057
2015-05-19 14:05:47,190 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2015-05-19 14:09:45,861 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
**2015-05-19 14:09:47,067 INFO [main] ipc.Client (Client.java:handleConnectionFailure(842)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-05-19 14:09:48,068 INFO [main] ipc.Client (Client.java:handleConnectionFailure(842)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...**
Aside from specifying the yarn.resourcemanager.hostname property in yarn-site.xml, it's also necessary to propagate the configuration files to the workers.
That can be done with this line (before running spark-submit):
export SPARK_YARN_DIST_FILES=$(ls $HADOOP_CONF_DIR* | sed 's#^#file://#g' | tr '\n' ',' | sed 's/,$//')
If everything's configured correctly, you'll see the RM hostname instead of 0.0.0.0 in this line:
2015-05-19 14:05:47,190 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
Exporting the correct value for HADOOP_CONF_DIR fixed the issue:
export HADOOP_CONF_DIR=/your-path/hadoop/conf
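For completeness, the yarn-site.xml entry mentioned above might look roughly like this on the worker (master-host is a placeholder for your ResourceManager's hostname):
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master-host</value>
</property>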

Spark ERROR network.ConnectionManager:

I'm getting the error below while submitting a spark-submit job. Can anyone please suggest how to resolve this issue?
15/02/18 12:06:17 INFO network.ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@5173169
java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/02/18 12:06:17 ERROR network.ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(bkcttplpd037.verizon.com,39010) not found
15/02/18 12:06:17 INFO network.ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@7a73a542
15/02/18 12:06:17 INFO network.ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@7a73a542
java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:310)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/02/18 12:06:18 INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
15/02/18 12:06:18 INFO network.ConnectionManager: Selector thread was interrupted!
15/02/18 12:06:18 INFO network.ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(abc02.com,49740)
15/02/18 12:06:18 ERROR network.ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(abc01.com,49740) not found
15/02/18 12:06:18 WARN network.ConnectionManager: All connections not cleaned up
15/02/18 12:06:18 INFO network.ConnectionManager: ConnectionManager stopped
15/02/18 12:06:18 INFO storage.MemoryStore: MemoryStore cleared
15/02/18 12:06:18 INFO storage.BlockManager: BlockManager stopped
15/02/18 12:06:18 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
15/02/18 12:06:18 INFO spark.SparkContext: Successfully stopped SparkContext
15/02/18 12:06:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/02/18 12:06:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.

HMaster getting Aborted at startup

HMaster gets aborted right after running ./start-hbase.sh in hbase-0.96.0 with hadoop 2.2.0.
I tried hbase-0.94.16 and hbase-0.98 as well, with the same result: HMaster aborts as soon as it starts. I even tried replacing the jars in the hbase lib directory, both manually and via maven, but the issue is unresolved. Is there any other solution?
Below is the corresponding hbase-hadoop-master-hadoop-master.log...
2014-02-24 10:11:27,078 INFO [Replication.RpcServer.handler=2,port=60000] ipc.RpcServer: Replication.RpcServer.handler=2,port=60000: starting
2014-02-24 10:11:27,565 INFO [RpcServer.handler=23,port=60000] ipc.RpcServer: RpcServer.handler=23,port=60000: starting
2014-02-24 10:11:27,970 INFO [master:hadoop-master:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-02-24 10:11:28,172 INFO [master:hadoop-master:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-02-24 10:11:28,177 INFO [master:hadoop-master:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2014-02-24 10:11:28,177 INFO [master:hadoop-master:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-02-24 10:11:28,191 INFO [master:hadoop-master:60000] http.HttpServer: Jetty bound to port 60010
2014-02-24 10:11:28,191 INFO [master:hadoop-master:60000] mortbay.log: jetty-6.1.26
2014-02-24 10:11:29,227 INFO [master:hadoop-master:60000] mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
2014-02-24 10:11:29,623 INFO [master:hadoop-master:60000] master.ActiveMasterManager: Registered Active Master=hadoop-master.payoda.com,60000,1393236677609
2014-02-24 10:11:29,629 INFO [master:hadoop-master:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2014-02-24 10:11:29,851 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2014-02-24 10:11:30,537 INFO [master:hadoop-master:60000] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-02-24 10:11:30,800 DEBUG [master:hadoop-master:60000] util.FSTableDescriptors: Current tableInfoPath = hdfs://hadoop-master:9000/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2014-02-24 10:11:30,821 DEBUG [master:hadoop-master:60000] util.FSTableDescriptors: TableInfo already exists.. Skipping creation
2014-02-24 10:11:30,944 INFO [master:hadoop-master:60000] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2014-02-24 10:11:30,950 INFO [master:hadoop-master:60000] master.SplitLogManager: Timeout=120000, unassigned timeout=180000, distributedLogReplay=false
2014-02-24 10:11:30,956 INFO [master:hadoop-master:60000] master.SplitLogManager: Found 0 orphan tasks and 0 rescan nodes
2014-02-24 10:11:31,000 INFO [master:hadoop-master:60000] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.14.35:2181 sessionTimeout=90000 watcher=hconnection-0x4a867fad
2014-02-24 10:11:31,012 INFO [master:hadoop-master:60000-SendThread(hadoop-master.payoda.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop-master.payoda.com/192.168.14.35:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-02-24 10:11:31,617 INFO [master:hadoop-master:60000-SendThread(hadoop-master.payoda.com:2181)] zookeeper.ClientCnxn: Socket connection established to hadoop-master.payoda.com/192.168.14.35:2181, initiating session
2014-02-24 10:11:31,617 INFO [master:hadoop-master:60000] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4a867fad connecting to ZooKeeper ensemble=192.168.14.35:2181
2014-02-24 10:11:31,620 INFO [master:hadoop-master:60000-SendThread(hadoop-master.payoda.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop-master.payoda.com/192.168.14.35:2181, sessionid = 0x1446360aa4a0001, negotiated timeout = 90000
2014-02-24 10:11:31,640 DEBUG [master:hadoop-master:60000] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@3eaa3e5b
**2014-02-24 10:11:31,684 FATAL [master:hadoop-master:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.IllegalArgumentException: .META. no longer exists. The table has been renamed to hbase:meta**
at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:292)
at org.apache.hadoop.hbase.zookeeper.ZKTable.populateTableStates(ZKTable.java:82)
at org.apache.hadoop.hbase.zookeeper.ZKTable.<init>(ZKTable.java:69)
at org.apache.hadoop.hbase.master.AssignmentManager.<init>(AssignmentManager.java:281)
at org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:677)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:809)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:662)
2014-02-24 10:11:31,684 INFO [master:hadoop-master:60000] master.HMaster: Aborting
2014-02-24 10:11:31,711 DEBUG [master:hadoop-master:60000] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@3eaa3e5b
2014-02-24 10:11:31,712 DEBUG [master:hadoop-master:60000] master.HMaster: Stopping service threads
2014-02-24 10:11:31,712 INFO [master:hadoop-master:60000] ipc.RpcServer: Stopping server on 60000
2014-02-24 10:11:31,712 INFO [RpcServer.handler=15,port=60000] ipc.RpcServer: RpcServer.handler=15,port=60000: exiting
2014-02-24 10:11:31,712 INFO [RpcServer.handler=23,port=60000] ipc.RpcServer: RpcServer.handler=23,port=60000: exiting
2014-02-24 10:11:32,129 INFO [master:hadoop-master:60000] master.HMaster: Stopping infoServer
2014-02-24 10:11:32,138 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2014-02-24 10:11:32,138 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2014-02-24 10:11:32,304 INFO [hadoop-master.payoda.com,60000,1393236677609.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: hadoop-master.payoda.com,60000,1393236677609.splitLogManagerTimeoutMonitor exiting
2014-02-24 10:11:32,304 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2014-02-24 10:11:32,304 INFO [Replication.RpcServer.handler=2,port=60000] ipc.RpcServer: Replication.RpcServer.handler=2,port=60000: exiting
2014-02-24 10:11:32,304 INFO [Replication.RpcServer.handler=1,port=60000] ipc.RpcServer: Replication.RpcServer.handler=1,port=60000: exiting
2014-02-24 10:11:32,304 INFO [Replication.RpcServer.handler=0,port=60000] ipc.RpcServer: Replication.RpcServer.handler=0,port=60000: exiting
2014-02-24 10:11:32,304 INFO [RpcServer.handler=29,port=60000] ipc.RpcServer: RpcServer.handler=29,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=28,port=60000] ipc.RpcServer: RpcServer.handler=28,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=27,port=60000] ipc.RpcServer: RpcServer.handler=27,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=26,port=60000] ipc.RpcServer: RpcServer.handler=26,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=25,port=60000] ipc.RpcServer: RpcServer.handler=25,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=24,port=60000] ipc.RpcServer: RpcServer.handler=24,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=22,port=60000] ipc.RpcServer: RpcServer.handler=22,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=21,port=60000] ipc.RpcServer: RpcServer.handler=21,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=20,port=60000] ipc.RpcServer: RpcServer.handler=20,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=19,port=60000] ipc.RpcServer: RpcServer.handler=19,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=18,port=60000] ipc.RpcServer: RpcServer.handler=18,port=60000: exiting
2014-02-24 10:11:32,305 INFO [RpcServer.handler=17,port=60000] ipc.RpcServer: RpcServer.handler=17,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=16,port=60000] ipc.RpcServer: RpcServer.handler=16,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=14,port=60000] ipc.RpcServer: RpcServer.handler=14,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=13,port=60000] ipc.RpcServer: RpcServer.handler=13,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=12,port=60000] ipc.RpcServer: RpcServer.handler=12,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=11,port=60000] ipc.RpcServer: RpcServer.handler=11,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=10,port=60000] ipc.RpcServer: RpcServer.handler=10,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=9,port=60000] ipc.RpcServer: RpcServer.handler=9,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=8,port=60000] ipc.RpcServer: RpcServer.handler=8,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=7,port=60000] ipc.RpcServer: RpcServer.handler=7,port=60000: exiting
2014-02-24 10:11:32,306 INFO [RpcServer.handler=6,port=60000] ipc.RpcServer: RpcServer.handler=6,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=5,port=60000] ipc.RpcServer: RpcServer.handler=5,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=4,port=60000] ipc.RpcServer: RpcServer.handler=4,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=3,port=60000] ipc.RpcServer: RpcServer.handler=3,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=2,port=60000] ipc.RpcServer: RpcServer.handler=2,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=1,port=60000] ipc.RpcServer: RpcServer.handler=1,port=60000: exiting
2014-02-24 10:11:32,307 INFO [RpcServer.handler=0,port=60000] ipc.RpcServer: RpcServer.handler=0,port=60000: exiting
2014-02-24 10:11:32,930 INFO [master:hadoop-master:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2014-02-24 10:11:32,945 INFO [master:hadoop-master:60000] client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1446360aa4a0001
2014-02-24 10:11:32,948 INFO [master:hadoop-master:60000] zookeeper.ZooKeeper: Session: 0x1446360aa4a0001 closed
2014-02-24 10:11:32,949 INFO [master:hadoop-master:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-02-24 10:11:32,954 INFO [master:hadoop-master:60000] zookeeper.ZooKeeper: Session: 0x1446360aa4a0000 closed
2014-02-24 10:11:32,954 INFO [master:hadoop-master:60000] master.HMaster: HMaster main thread exiting
2014-02-24 10:11:32,955 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-02-24 10:11:32,955 ERROR [main] master.HMasterCommandLine: Master exiting
**java.lang.RuntimeException: HMaster Aborted**
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2787)
Did you upgrade from HBase 0.94.x to 0.96.x? The two are largely incompatible, starting with the migration to protocol buffers as the RPC mechanism and, yes, the changes in the meta-table approach (.META. was renamed to hbase:meta, exactly as the fatal log line says).
Please be sure you have checked the upgrade documentation:
http://hbase.apache.org/upgrading.html#upgrade0.96
Pay special attention to the ZooKeeper service.
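If the stale .META. reference lives in ZooKeeper, as the ZKTable.populateTableStates frame in the stack trace suggests, one common remediation, sketched here under the assumption that ZooKeeper holds no state you still need, is to clear HBase's znode and let the new version recreate it:
# Stop HBase first, then remove HBase's state from ZooKeeper.
# /hbase is the default zookeeper.znode.parent; adjust if yours differs.
./bin/hbase zkcli
rmr /hbase
quit
# Restart HBase; the 0.96 master will recreate its znodes and hbase:meta.
./bin/start-hbase.sh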
