I am trying to connect HBase with jasperreports-server-cp-6.0.1. I have Hadoop 2.5.2 and HBase 1.0.1 installed on my system.
I have installed the HBasePlugin-0.5.1.nbm plugin in iReport 5.6.0.
I have followed all the steps given in: http://community.jaspersoft.com/wiki/hadoop-hbase
When I write the following query:
{ "tableName" : "blogposts", "deserializerClass" : "com.jaspersoft.hbase.deserialize.impl.ShellDeserializer" }
In iReport, I am getting the following error:
Message:
net.sf.jasperreports.engine.JRException: No deserializer defined
Level:
SEVERE
Stack Trace:
No deserializer defined
com.jaspersoft.hadoop.hbase.query.HBaseQueryWrapper.<init>(HBaseQueryWrapper.java:152)
com.jaspersoft.hadoop.hbase.HBaseFieldsProvider.getFields(HBaseFieldsProvider.java:50)
com.jaspersoft.ireport.hbase.designer.HBaseFieldsProvider.getFields(HBaseFieldsProvider.java:57)
com.jaspersoft.ireport.hbase.connection.HBaseConnection.readFields(HBaseConnection.java:185)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
Could you please help me with this error (I also tried with iReport 4.0.2, but I received the same error)?
Both iReport and the HBase connector are outdated.
Try using the Apache Phoenix JDBC driver, which is compatible with the latest release (6.2) of the Jaspersoft products:
http://community.jaspersoft.com/wiki/how-use-apache-phoenix-jdbc-driver-run-reports-hbase
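For illustration, a minimal connection through the Phoenix JDBC driver might look like the sketch below (the ZooKeeper quorum host is a placeholder, and it assumes the blogposts table has been mapped as a Phoenix table or view):

// Minimal sketch, assuming the Phoenix client jar is on the classpath.
// "zk-host" is a placeholder for the ZooKeeper quorum of the HBase cluster.
import java.sql.*;

public class PhoenixProbe {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM blogposts LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

If this works from plain Java, the same JDBC URL and driver jar can be registered as a data source in JasperReports Server.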
Thanks!
I'm having trouble using the GeoMondrian Workbench on my Ubuntu 18.04 LTS system. I've followed the installation instructions and have installed the following:
Oracle Java 8
PostgreSQL 9.5
PostGIS 2.5
I've downloaded the GeoMondrian Workbench and the "simple_geofoodmart.sql" file, created a database, and passed the necessary parameters to the workbench.
However, when I try to open the "simple_geofoodmart.xml" schema file, I get the following error:
Error: Schema file /home/tarik/workbench/demo/simple_geofoodmart.xml is invalid. org/opengis/referencing/NoSuchAuthorityCodeException java.lang.NoClassDefFoundError: org/opengis/referencing/NoSuchAuthorityCodeException
Additionally, when I try to use an MDX query, I get the following error:
"Exception in thread 'AWT-EventQueue-0' java.lang.NoClassDefFoundError: Could not initialize class mondrian.olap.fun.GlobalFunTable at mondrian.rolap.RolapSchema$RolapSchemaFunctionTable.defineFunctions(RolapSchema.java:1643)"
I've tried to resolve all the dependencies and have made sure that I have the necessary JAR files in my classpath, but I'm still getting the same errors. Can anyone help me figure out what's going wrong and how to fix it? Thank you!
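For what it's worth, the missing class is normally provided by the GeoAPI (gt-opengis) jar that ships with GeoTools (an assumption worth verifying), so a tiny probe run with the same classpath as the workbench can confirm whether it is visible at runtime:

// Hypothetical probe: run it with the workbench's classpath, e.g.
//   java -cp "lib/*:." ClasspathProbe
public class ClasspathProbe {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("org.opengis.referencing.NoSuchAuthorityCodeException");
            // print which jar actually provides the class
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("Not on the classpath; add the GeoAPI/gt-opengis jar to the workbench's lib directory");
        }
    }
}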
I am able to connect to the cluster, browse its Hive catalog, and see tables/views and columns/datatypes.
Running a simple select statement against a view over a Parquet file produces this error and no other results:
SQL Error [500540] [HY000]: [Databricks][DatabricksJDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 180. Background thread ID: 223. Error caught: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.
Standard Databricks cluster:
Standard_DS3_v2
JDBC URL:
jdbc:databricks://<redacted>.1.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<redacted>/<redacted>;AuthMech=3;UID=token;PWD=<redacted>
Advanced Options Spark Config:
spark.databricks.cluster.profile singleNode
spark.databricks.io.directoryCommit.createSuccessFile false
spark.master local[*, 4]
spark.driver.extraJavaOptions -Dio.netty.tryReflectionSetAccessible=true
spark.hadoop.fs.azure.account.key.<redacted>.blob.core.windows.net <redacted>
spark.executor.extraJavaOptions -Dio.netty.tryReflectionSetAccessible=true
parquet.enable.summary-metadata false
My local machine:
DBeaver version 22.1.2.202207091909
macOS version (M1 chip): Monterey 12.4
Java version:
java --version
openjdk 18.0.1 2022-04-19
OpenJDK Runtime Environment Homebrew (build 18.0.1+0)
OpenJDK 64-Bit Server VM Homebrew (build 18.0.1+0, mixed mode, sharing)
I am able to do the following with no errors (Databricks default test dataset):
CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header "true");
When I run either select color from diamonds; or select * from diamonds;, I get this:
SQL Error [500618] [HY000]: [Databricks][DatabricksJDBCDriver](500618) Error occured while deserializing arrow data: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
Hence, any select query on any object (parquet file or anything else) causes the error described above.
What could be the problem? Any recommendations on how to resolve this error? Why am I able to connect and see the metadata of the schemas/tables/views/columns, but not query or view the data?
P.S. I followed this guide exactly: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/dbeaver#step-3-connect-dbeaver-to-your-azure-databricks-databases
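One detail that may be relevant: the Arrow deserialization in these stack traces runs in the local JVM hosting the JDBC driver, not on the cluster (metadata calls don't go through Arrow, which would explain why browsing works but fetching rows fails), so the spark.driver.extraJavaOptions setting above would not reach it. On JDK 16+ the java.nio internals Arrow relies on are closed off by default. A minimal local probe with the commonly suggested JVM flags (treat the flags and driver jar name as assumptions to verify against the driver documentation):

// Minimal sketch, assuming the Databricks JDBC driver jar is on the classpath.
// Run with flags that re-open the java.nio internals Arrow needs on newer JDKs:
//   java --add-opens=java.base/java.nio=ALL-UNNAMED \
//        -Dio.netty.tryReflectionSetAccessible=true \
//        -cp DatabricksJDBC42.jar:. QueryProbe
import java.sql.*;

public class QueryProbe {
    public static void main(String[] args) throws Exception {
        // placeholders: fill in your workspace host, HTTP path, and token
        String url = "jdbc:databricks://<host>:443/default;transportMode=http;ssl=1;"
                + "httpPath=<http-path>;AuthMech=3;UID=token;PWD=<token>";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select color from diamonds limit 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

If the probe succeeds, the same flags can be added to the JVM that runs DBeaver, or DBeaver can be pointed at a Java 11 runtime.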
Trying to persist a Hive table from the storm-hive client, I am getting the following entries in the HiveMetastoreServer logs:
2020-02-26 23:20:27,748 ERROR org.apache.thrift.server.TThreadPoolServer: [pool-8-thread-178]: Error occurred during processing of message.
java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:1641
at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:906) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:781) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
Try using explode instead of unnest! Check this:
https://stackoverflow.com/a/51846380/9185215
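For illustration, a hedged HiveQL sketch of that substitution (table and column names are made up):

-- UNNEST is not HiveQL; the Hive equivalent is LATERAL VIEW explode()
SELECT o.id, t.item
FROM orders o
LATERAL VIEW explode(o.items) t AS item;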
I have downgraded the storm-hive client from 2.1.0 to 1.2.3. I also excluded the Hive dependency jars from storm-hive 1.2.3 and added Hive client version 2.1.1 to match my Cloudera environment.
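For reference, a minimal sketch of what that exclusion might look like in a pom.xml (the excluded group IDs are illustrative; check the actual transitive dependencies of storm-hive 1.2.3):

<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-hive</artifactId>
  <version>1.2.3</version>
  <exclusions>
    <!-- wildcard exclusions need a reasonably recent Maven (3.2.1+) -->
    <exclusion>
      <groupId>org.apache.hive</groupId>
      <artifactId>*</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- then pin the Hive client to the cluster's version -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>2.1.1</version>
</dependency>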
I found a similar error in https://issues.apache.org/jira/browse/KYLIN-2511
env:
hadoop-2.7.1
hbase-1.3.2
apache-hive-2.1.1-bin
apache-kylin-1.6.0-hbase1.x-bin
I've tried copying all the Hive libs to Kylin, but I get another error:
org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/hadoop/hive/serde2/typeinfo/TypeInfo
The missing class should be in hive-exec-*.jar. Check and debug "bin/find-hive-dependency.sh" to see why it wasn't able to locate this jar on your server; you can manually add it to the "hive_exec_path" variable.
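A minimal sketch of that check, assuming a typical Hive layout (paths are illustrative):

# locate the hive-exec jar on the node
find /usr/lib/hive /opt -name 'hive-exec-*.jar' 2>/dev/null

# confirm it actually contains the missing class
unzip -l /usr/lib/hive/lib/hive-exec-2.1.1.jar | grep 'serde2/typeinfo/TypeInfo'

# then make sure bin/find-hive-dependency.sh picks it up, e.g.:
hive_exec_path=/usr/lib/hive/lib/hive-exec-2.1.1.jar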
BTW, Kylin 1.6 is quite old, try to upgrade to a 2.x version.
Why not just try the method mentioned in https://issues.apache.org/jira/browse/KYLIN-2511? You'd better prepare the environment according to the v1.6 documentation. It is better to use the latest version of Kylin; it has more features and fixes some bugs.
We are currently using the 2.0.0-M06 snapshot version of the Neo4j JDBC driver and are trying to move to the latest version available. We found version 2.1.4 in the Maven repository below:
https://m2.neo4j.org/content/repositories/releases/org/neo4j/neo4j-jdbc/
However, while trying to use it we see the error below:
Caused by: java.lang.IllegalStateException: Error during parsing
at org.neo4j.jdbc.rest.StreamingParser$ParserState.nextToken(StreamingParser.java:71)
at org.neo4j.jdbc.rest.StreamingParser.skipTo(StreamingParser.java:313)
at org.neo4j.jdbc.rest.StreamingParser.nextResult(StreamingParser.java:130)
at org.neo4j.jdbc.rest.StreamingParser$2.hasNext(StreamingParser.java:265)
at org.neo4j.jdbc.rest.StreamingParser$2$1.endReached(StreamingParser.java:269)
at org.neo4j.jdbc.rest.StreamingParser$1.hasNext(StreamingParser.java:201)
at org.neo4j.jdbc.IteratorResultSet.hasNext(IteratorResultSet.java:98)
at org.neo4j.jdbc.IteratorResultSet.next(IteratorResultSet.java:63)
at com.mchange.v2.c3p0.impl.NewProxyResultSet.next(NewProxyResultSet.java:2859)
... 92 more
Caused by: java.io.IOException: Stream closed
at sun.nio.cs.StreamDecoder.ensureOpen(StreamDecoder.java:46)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:148)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at org.codehaus.jackson.impl.ReaderBasedParser.loadMore(ReaderBasedParser.java:117)
at org.codehaus.jackson.impl.ReaderBasedParser._skipWSOrEnd(ReaderBasedParser.java:1476)
at org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:368)
at org.neo4j.jdbc.rest.StreamingParser$ParserState.nextToken(StreamingParser.java:67)
... 100 more
We found a reference that this is addressed in version 2.2 of the driver and are therefore trying to download it. Can someone please point us in the right direction for getting the 2.2 binary of the neo4j-jdbc driver? Also, we currently use Neo4j 2.2 for our DB server.
Thx,
NN
I think version 2.2 is not released yet.
You can try to build your own binaries from the source code - https://github.com/neo4j-contrib/neo4j-jdbc
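A minimal sketch, assuming a standard Maven build (branch and module layout may differ; check the repository's README):

git clone https://github.com/neo4j-contrib/neo4j-jdbc.git
cd neo4j-jdbc
mvn clean install -DskipTests
# the driver jar then lands under target/ and in your local ~/.m2 repository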