Accumulo init fails to connect to ZooKeeper - Hadoop

When I initialize Accumulo with the command accumulo init, an error appears: it cannot connect to ZooKeeper, even though ZooKeeper is running (INFO: zookeeper is localhost:2181):
Thread "init" died null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.accumulo.start.Main$1.run(Main.java:101)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Failed to connect to zookeeper (localhost:2181) within 2x zookeeper timeout period 30000
at org.apache.accumulo.server.util.Initialize.main(Initialize.java:498)
... 6 more
Caused by: java.lang.RuntimeException: Failed to connect to zookeeper (localhost:2181) within 2x zookeeper timeout period 30000
at org.apache.accumulo.fate.zookeeper.ZooSession.connect(ZooSession.java:96)
at org.apache.accumulo.fate.zookeeper.ZooSession.getSession(ZooSession.java:146)
at org.apache.accumulo.fate.zookeeper.ZooReader.getSession(ZooReader.java:36)
at org.apache.accumulo.fate.zookeeper.ZooReaderWriter.getZooKeeper(ZooReaderWriter.java:53)
at org.apache.accumulo.fate.zookeeper.ZooReader.exists(ZooReader.java:70)
at org.apache.accumulo.server.util.Initialize.zookeeperAvailable(Initialize.java:197)
at org.apache.accumulo.server.util.Initialize.doInit(Initialize.java:122)
at org.apache.accumulo.server.util.Initialize.main(Initialize.java:494)
... 6 more
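Before retrying init, it is worth confirming that ZooKeeper is actually reachable on the advertised host and port. A minimal sanity check, assuming a standard ZooKeeper install with the four-letter-word commands enabled (paths and environment variables below are illustrative):

# ZooKeeper should answer "imok" if it is serving requests
echo ruok | nc localhost 2181
# Alternatively, ask the ZooKeeper scripts for the server status
$ZOOKEEPER_HOME/bin/zkServer.sh status
# Make sure instance.zookeeper.host in accumulo-site.xml matches the running server
grep -A1 instance.zookeeper.host $ACCUMULO_HOME/conf/accumulo-site.xml

If the ruok probe gets no reply, the problem is on the ZooKeeper side (wrong port, firewall, or a process bound to another interface) rather than in accumulo init itself.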

Related

Geomesa-accumulo add-index job fails

I have a problem with GeoMesa failing on adding indexes; maybe someone knows where the problem is?
geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values
DEBUG Looking up Accumulo Instance Id in Zookeeper for 5000 milliseconds.
DEBUG You can specify the Instance Id via the command line or
change the Zookeeper timeout by setting the system property 'instance.zookeeper.timeout'.
INFO Running map reduce index job for attributes: [asset_id] with coverage: full...
ERROR Error encountered running attribute index command. Check hadoop's job history logs for more information.
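As the DEBUG lines above note, the instance-id lookup timeout can be raised through the instance.zookeeper.timeout system property. A minimal sketch, assuming the geomesa-accumulo launcher passes JAVA_OPTS through to the JVM (an assumption; check your installation's script, and whether your version expects milliseconds or a duration like 30s):

# Give the ZooKeeper instance-id lookup 30 seconds instead of 5
export JAVA_OPTS="$JAVA_OPTS -Dinstance.zookeeper.timeout=30000"
geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values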
I found that no jobs were created in Hadoop, so there are no job history logs; but in the tserver logs I found:
2021-01-25 12:32:05,129 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
2021-01-25 12:32:05,202 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
hadoop 3.1
accumulo 1.9.3
geomesa-accumulo 2.4.0
Any advice?
GeoMesa logs below; the error looks the same as the ZooKeeper one:
2021-01-25 13:29:38,762 DEBUG [org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator] IOException thrown
java.io.IOException: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:760)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:158)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.flush(ThriftTransportPool.java:346)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.send_startMultiScan(TabletClientService.java:326)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:308)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:684)
... 6 more
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:475)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 13 more
Here are more logs from GeoMesa; it seems there is some problem with job creation:
2021-01-25 13:54:36,873 WARN [org.apache.hadoop.mapred.LocalJobRunner] job_local1471203421_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.NullPointerException
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:103)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:102)
at org.locationtech.geomesa.utils.io.WithStore.apply(WithStore.scala:37)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper.setup(AttributeIndexJob.scala:102)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Error from the mapred job:
Exception in thread "main" java.lang.NumberFormatException: For input string: "local1471203421"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.mapreduce.TypeConverter.toClusterTimeStamp(TypeConverter.java:111)
at org.apache.hadoop.mapreduce.TypeConverter.toYarn(TypeConverter.java:82)
at org.apache.hadoop.mapred.ClientServiceDelegate.<init>(ClientServiceDelegate.java:121)
at org.apache.hadoop.mapred.ClientCache.getClient(ClientCache.java:68)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:215)
at org.apache.hadoop.mapreduce.tools.CLI.getJob(CLI.java:660)
at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:470)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
Hadoop 3.1 does not support this feature; you need to update to Hadoop 3.2.
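One detail worth noting from the logs above (an observation, not a confirmed fix): job_local1471203421_0001 is a LocalJobRunner job id, which is exactly what the NumberFormatException chokes on, and it means the job executed locally rather than on YARN. It may be worth checking that the client is configured to submit to YARN at all:

# The MapReduce framework should be "yarn" for cluster execution
grep -A1 mapreduce.framework.name $HADOOP_CONF_DIR/mapred-site.xml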

Hadoop cluster - Hive cannot start after reboot

I have a Hadoop cluster of 5 nodes running.
Hive was running fine and could create tables, add data, etc.
Then I rebooted all 5 nodes, and now Hive cannot start.
I am using MySQL as the metastore.
What could be the issue, and how do I fix it? Logs from trying to start Hive:
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:556)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:214)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:332)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:293)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:268)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:529)
... 8 more
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3230)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3249)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3474)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:225)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:209)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
... 19 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
To resolve this issue, use the local Hive metastore for the Spark Thrift Server by adding the following property under /etc/spark/conf/hive-site.xml:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
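Note that this only helps if the metastore service itself is running and listening on port 9083. A minimal sketch of starting and verifying it, assuming a standard Hive installation:

# Start the standalone Hive metastore service
hive --service metastore &
# Confirm that something is listening on the metastore port
netstat -tlnp | grep 9083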

Why does Hadoop on Windows try to connect to 0.0.0.0:10020 (unsuccessfully)?

I have installed Hadoop on Windows according to this article and am now able to run the test application hadoop-mapreduce-examples-X.Y.Z.jar.
Unfortunately, when I start a full-scale application, it starts to access the strange address 0.0.0.0:10020. I have changed my DFS config to <value>hdfs://0.0.0.0</value>, but this didn't help.
The exception is the following:
[Thread-14] INFO org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob - Job status available at: http://lagrangian:8088/proxy/application_1525212500911_0002/
[Thread-14] ERROR org.apache.crunch.impl.mr.exec.MRExecutor - Pipeline failed due to exception
java.io.IOException: java.io.IOException: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.handleMultiPaths(CrunchJobHooks.java:92)
at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.run(CrunchJobHooks.java:79)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.checkRunningState(CrunchControlledJob.java:288)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.checkState(CrunchControlledJob.java:299)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.checkRunningJobs(CrunchJobControl.java:193)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:313)
at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:131)
at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:58)
at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:90)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:344)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.handleMultiPaths(CrunchJobHooks.java:84)
... 9 more
Caused by: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
at org.apache.hadoop.ipc.Client.call(Client.java:1435)
at org.apache.hadoop.ipc.Client.call(Client.java:1345)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy20.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
... 19 more
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 28 more
I read that it is probably related to the Job History Server, but I am not sure how to run it on Windows.
This is probably because the JobHistory server isn't started. You can run it using
mapred historyserver
It should be very similar between Windows and Linux. Check the log output and jps to verify it's running.
Your service addresses should ideally be a hostname (but not localhost); 0.0.0.0 makes them listen on all interfaces.
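Clients end up at 0.0.0.0:10020 when they fall back to the default value of mapreduce.jobhistory.address. A sketch of pinning the history server to a real hostname in mapred-site.xml (the hostname below is taken from the logs above and is only illustrative):

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>lagrangian:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>lagrangian:19888</value>
</property>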

Hive Derby Metastore Configuration

I have configured Hive over Hadoop and HBase using this tutorialspoint guide:
http://www.tutorialspoint.com/hive/hive_installation.htm
I am facing this issue with Hive. Please help.
Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
Caused by: javax.jdo.JDOFatalDataStoreException: Unable to open a test connection to the given database. JDBC url = jdbc:derby://localhost:1527/metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLNonTransientConnectionException: java.net.ConnectException : Error connecting to server localhost on port 1527 with message Connection refused.
at org.apache.derby.client.am.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.client.am.SqlException.getSQLException(Unknown Source)
at org.apache.derby.jdbc.ClientDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
It looks like you haven't started the Derby database. Take a look at this Hive doc: https://cwiki.apache.org/confluence/display/Hive/HiveDerbyServerMode. There is a short paragraph on how to start Derby.
The Derby server isn't running, so Hive throws the connection refused exception.
Go to %DERBY_HOME%\bin and run the StartNetworkServer.cmd file.
Now restart the Hive services.
Hope this is really helpful for you.
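For non-Windows installs, the equivalent is the startNetworkServer script in the same directory. A minimal sketch of starting Derby and checking that it accepts connections before restarting Hive (paths are illustrative):

# Start the Derby network server (listens on port 1527 by default)
$DERBY_HOME/bin/startNetworkServer &
# Ping the server to confirm it is accepting connections
$DERBY_HOME/bin/NetworkServerControl ping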

Hadoop YARN setup authentication issue

I followed the instructions on this page to install a single-machine YARN cluster: http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/SingleCluster.html
But when I run the example jar, the job hangs, and when I check the ResourceManager log I find the following error (the first is the client-side log, the second the ResourceManager log):
(Client side)
13/10/18 17:30:36 ERROR security.UserGroupInformation: PriviledgedActionException as:zhangj82 (auth:SIMPLE) cause:java.io.IOException
java.io.IOException
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:326)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:385)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:526)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:313)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:310)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:310)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:594)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1277)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1239)
at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:283)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Resource Manager
2013-10-18 17:35:26,128 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: readAndProcess threw exception javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier] from client 127.0.0.1. Count of bytes read: 0
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier]
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:594)
at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
at org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1173)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1350)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:726)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:525)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:500)
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:112)
at org.apache.hadoop.security.SaslRpcServer$SaslDigestCallbackHandler.handle(SaslRpcServer.java:217)
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:585)
... 6 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at org.apache.hadoop.io.Text.readFields(Text.java:306)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.readFields(AbstractDelegationTokenIdentifier.java:186)
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:109)
... 8 more
2013-10-18 17:35:26,308 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1382088798449_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
This bug has been raised in the Hadoop issue tracker. Basically, to overcome it you may apply a source-level patch as described in BlockTokenSecretManager, or try updating to the latest version of Hadoop.
