Create Hive External Table on S3 throws "org.apache.hadoop.fs.s3a.S3AFileSystem not found" Exception - hadoop

I'm using beeline on my local machine to run the DDL below, and it throws the exception.
The DDL is:
CREATE TABLE `report_landing_pages`(
`google_account_id` string COMMENT 'from deserializer',
`ga_view_id` string COMMENT 'from deserializer',
`path` string COMMENT 'from deserializer',
`users` string COMMENT 'from deserializer',
`page_views` string COMMENT 'from deserializer',
`event_value` string COMMENT 'from deserializer',
`report_date` string COMMENT 'from deserializer')
PARTITIONED BY (`dt` date)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE
LOCATION 's3a://bucket_name/table'
The exception is:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found)
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:862)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:867)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4356)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:354)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255)
... 11 more
Caused by: MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:42070)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:42038)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result.read(ThriftHiveMetastore.java:41964)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_table_with_environment_context(ThriftHiveMetastore.java:1199)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_table_with_environment_context(ThriftHiveMetastore.java:1185)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:2399)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:93)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:752)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
at com.sun.proxy.$Proxy34.createTable(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2330)
at com.sun.proxy.$Proxy34.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:852)
... 22 more
And the hdfs CLI on my local machine works fine with "hdfs dfs -mkdir s3a://bucket/table".
And the weird thing is, if I first create the table with a non-S3 location, and then manually update the table's location to S3 in the metastore later, a select statement like
select COUNT(*) from report_landing_pages group by google_account_id
works fine.
How can I fix the exception in the DDL?
BTW, I'm running Hive 2.3.2 and Hadoop 2.7.5 under Mac OS X El Capitan.

Problem solved.
After placing the S3 jars, the metastore service also needs to be restarted, in addition to hiveserver2.

Did you place all the required jars (S3 etc.) on the classpath?
org.apache.hadoop.fs.s3a.S3AFileSystem is a Hadoop class, found in the hadoop-aws jar. An exception reporting that this class is missing means that the jar is not on the classpath.
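For a vanilla local install like the one in the question, a minimal sketch of getting that class onto Hive's classpath could look like the following; the paths and jar version numbers are assumptions that depend on your layout (the Hadoop 2.7.x binary distribution ships hadoop-aws and its matching AWS SDK under share/hadoop/tools/lib, but check what your distribution actually has):
# copy the S3A filesystem jar and its AWS SDK dependency where Hive can see them
cp $HADOOP_HOME/share/hadoop/tools/lib/hadoop-aws-2.7.5.jar $HIVE_HOME/lib/
cp $HADOOP_HOME/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar $HIVE_HOME/lib/
# then restart both HiveServer2 and the metastore so they pick the jars up
# (see the restart steps in the answer below)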

I tried creating the same table in my environment and it worked.
Check your fs.s3a.access.key, fs.s3a.secret.key, and fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem properties in the configuration file.
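A quick way to confirm those properties are actually present where the metastore and HiveServer2 read them is to grep the config files; this is only a sketch, and the paths are assumptions that depend on your installation layout:
# check the Hive and Hadoop configuration files the services load at startup
grep -B1 -A2 'fs.s3a' $HIVE_HOME/conf/hive-site.xml
grep -B1 -A2 'fs.s3a' $HADOOP_HOME/etc/hadoop/core-site.xml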

Yes, that's correct. After placing the jar, the Hive metastore service needs to be restarted.
To restart the Hive metastore, the following steps need to be followed:
1. ps -ef | grep 'hive'
Use the above command to identify the process ID (PID) that Hive is using. By default, the Hive metastore uses port 9083, so for a running metastore service you can also find the PID with the lsof -i:9083 command.
2. kill <process number>
This kills the existing hive metastore service.
3. hive --service metastore
This command starts the metastore service again.
OR
If your installation manages the metastore with an init script, restart it with the following commands:
$ sudo /etc/init.d/hive-metastore stop
$ sudo /etc/init.d/hive-metastore start
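Putting the steps together, a minimal restart sequence for a plain (non-packaged) install might look like this; the use of lsof, nohup, and the log path are illustrative assumptions:
PID=$(lsof -t -i:9083)                                        # metastore PID, assuming the default port
kill $PID
nohup hive --service metastore > /tmp/metastore.log 2>&1 &    # start the metastore again in the background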

Related

EMR - Sqoop import using hcatalog failing on EMR

I am using EMR 6.4.0 (Sqoop version 1.4.7) and an Oozie workflow to import data from Postgres into Hive partitions using HCatalog. Data is getting loaded into the table and partitions are getting created, but the job fails with the following error:
Job commit failed: java.lang.UnsupportedOperationException: getTokenStrForm is not supported
at com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate.getTokenStrForm(GlueMetastoreClientDelegate.java:1630)
at com.amazonaws.glue.catalog.metastore.AWSCatalogMetastoreClient.getTokenStrForm(AWSCatalogMetastoreClient.java:611)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.invoke(HiveClientCache.java:590)
at com.sun.proxy.$Proxy109.getTokenStrForm(Unknown Source)
at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.cancelDelegationTokens(FileOutputCommitterContainer.java:1012)
at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:273)
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:286)
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:238)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I don't see any mention of this error or limitation in the EMR docs or on the web.
Importing directly into the table directory works, but I wanted to know why the HCatalog option can't be used.

/var/mapr/cluster/yarn/rm/staging/username/.staging/job_id_1234_4321/libjars/myudfs.jar (Invalid argument)

We are using MapR with PAM authentication to execute a select query on Hive. The select query uses myudfs.jar, where we have defined our UDFs.
I have tried many links but couldn't figure out why this is happening. From the stack trace, it seems Hadoop is not able to copy the jar into the libjars directory inside .staging, even though the UDF jar is on the classpath.
Any help would be really appreciated.
java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. 2063.142.526190 /var/mapr/cluster/yarn/rm/staging/username/.staging/job_112233333_0002/libjars/myudfs.jar (Invalid argument)
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: 2063.142.526190 /var/mapr/cluster/yarn/rm/staging/username/.staging/job_112233333_0002/libjars/myudfs.jar (Invalid argument)
at com.mapr.fs.Inode.throwIfFailed(Inode.java:390)
at com.mapr.fs.Inode.flushPages(Inode.java:505)
at com.mapr.fs.Inode.releaseDirty(Inode.java:583)
at com.mapr.fs.MapRFsOutStream.dropCurrentPage(MapRFsOutStream.java:73)
at com.mapr.fs.MapRFsOutStream.write(MapRFsOutStream.java:85)
at com.mapr.fs.MapRFsDataOutputStream.write(MapRFsDataOutputStream.java:39)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:87)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:376)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:346)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:297)
at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:414)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:201)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:283)
The following is how we execute the query and what we do before executing it.
First we create a function as given below:
CREATE FUNCTION func_name AS 'com.test.ClassContainingMapper'
This ClassContainingMapper class is packaged in a UDF jar named myudf.jar. We have added this jar to the classpath using the hive.aux.jars.path property.
Now, our query is as follows:
Select func_name(col1, col2) from dbname.tablename;
When this query is executed, Hadoop tries to upload the jars on the classpath to its libjars folder under the staging directory. That's when it fails.
The interesting thing is that on one similar cluster this query executes successfully, but on the other cluster it fails with the exception shown in the stack trace.
UPDATE:
Actually, there is another statement executed before the select query. It's:
ADD JAR '/path/to/the/jar/file/myudf.jar';
This makes more sense: when this statement is executed, the jar is uploaded to the cluster, and it is during that upload that the query fails.
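For reference, one way to reproduce the same sequence end to end from the command line is a small script like the one below; the JDBC URL and temp file name are assumptions, and the statements are adapted from the question:
# write the statements into a script and run it through beeline
cat > /tmp/run_udf.hql <<'EOF'
ADD JAR /path/to/the/jar/file/myudf.jar;
CREATE FUNCTION func_name AS 'com.test.ClassContainingMapper';
SELECT func_name(col1, col2) FROM dbname.tablename;
EOF
beeline -u jdbc:hive2://localhost:10000 -f /tmp/run_udf.hql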

HBase tables disappear

In a pseudo-distributed Hadoop installation (HBase, YARN, HDFS), at one point all of HBase's tables disappeared. How can I fix them?
In the hbase shell, "list" returns 0 row(s).
Creating a table named transactions, one of those that disappeared, gives:
Api Error: org.apache.hadoop.hbase.TableExistsException: transactions
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.convertResult(HBaseAdmin.java:4352)
at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4310)
at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4244)
at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:649)
at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:579)
at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.createTable(ThriftServerRunner.java:1192)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
at com.sun.proxy.$Proxy10.createTable(Unknown Source)
at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$createTable.getResult(Hbase.java:4018)
at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$createTable.getResult(Hbase.java:4002)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): transactions
at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:300)
at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:107)
at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:427)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:999)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:803)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:756)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:441)
It looks like a problem with ZooKeeper.
The table is no longer present in HBase, but ZooKeeper still keeps the table information, i.e. the metadata about the table.
So run the ZooKeeper CLI and remove the /hbase directory from ZooKeeper with this command:
rmr /hbase
Then restart HBase again; that will resolve the problem.
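A minimal sketch of that sequence, assuming a pseudo-distributed setup with the stock HBase scripts on the PATH (adjust paths and the ZooKeeper address for your installation):
hbase zkcli                      # or: zkCli.sh -server localhost:2181
rmr /hbase                       # inside the ZooKeeper shell; removes only HBase's znodes, not the data in HDFS
quit
stop-hbase.sh                    # restart HBase so it recreates its ZooKeeper state
start-hbase.sh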

Problems querying a table with Apache Spark

I'm trying out Spark and Hive.
I want to select from one table:
hiveContext.hql("select * from final_table").collect()
but I get this error:
ERROR Hive: NoSuchObjectException(message:default.final_table table not found)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
at com.sun.proxy.$Proxy27.get_table(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1008)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
at com.sun.proxy.$Proxy28.getTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1000)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:974)
at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
at scala.Option.getOrElse(Option.scala:120)
But when I try this:
hiveContext.hql("CREATE TABLE IF NOT EXISTS TestTable (key INT, value STRING)")
I don't have any problem and the table is created.
Any ideas about this problem, any solution?
Thanks!
Which command do you use to start Spark? Most likely you haven't set up the Hive metastore the right way, which means that each time you start your cluster you are creating a new temporary local metastore. To use the Hive metastore, follow these guides: "Run Spark with build-in Hive", "Configuring a remote PostgreSQL database for the Hive Metastore", and https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables. This way, the tables you create will persist in the Hive metastore between cluster restarts.
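A rough sketch of the usual fix for a standalone install, assuming your Hive config lives under /etc/hive/conf (adjust the path for your distribution): make hive-site.xml visible to Spark so that HiveContext talks to the shared metastore instead of creating a local one.
cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/    # point Spark at the existing Hive metastore
$SPARK_HOME/bin/spark-shell                          # HiveContext queries now see the same tables as beeline/hive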

Using Phoenix to help integrate Elasticsearch and HBase: when using sqlline.py to create a table, something bad happens

I followed the instructions in Connecting Hbase to Elasticsearch in 10 min or less. Everything goes fine until the step "Create a table in Hbase using SQLline". When I type $PHOENIX_HOME/hadoop1/bin/sqlline.py localhost, the terminal shows:
znbee#znbee-Aspire-V5-452G:~/phoenix-4.1.0-bin/hadoop1$ bin/sqlline.py localhost
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
14/12/19 11:35:03 WARN util.Tracing: Tracing will outputs will not be written to any metrics sink! No TraceMetricsSink found on the classpath
java.lang.RuntimeException: Could not create interface org.apache.phoenix.trace.PhoenixSpanReceiver Is the hadoop compatibility jar on the classpath?
at org.apache.hadoop.hbase.CompatibilityFactory.getInstance(CompatibilityFactory.java:60)
at org.apache.phoenix.trace.TracingCompat.newTraceMetricSource(TracingCompat.java:40)
at org.apache.phoenix.trace.util.Tracing.addTraceMetricsSource(Tracing.java:294)
at org.apache.phoenix.jdbc.PhoenixConnection.<clinit>(PhoenixConnection.java:125)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1516)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1489)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1489)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:129)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4650)
at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4701)
at sqlline.SqlLine$Commands.connect(SqlLine.java:3942)
at sqlline.SqlLine$Commands.connect(SqlLine.java:3851)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
at sqlline.SqlLine.dispatch(SqlLine.java:817)
at sqlline.SqlLine.initArgs(SqlLine.java:633)
at sqlline.SqlLine.begin(SqlLine.java:680)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)
Caused by: java.util.NoSuchElementException
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:357)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.hbase.CompatibilityFactory.getInstance(CompatibilityFactory.java:46)
... 24 more
