I am having some difficulty creating a column family (table) in cassandra via the cassandra-jdbc driver.
The CQL command works correctly in cqlsh, but not when using cassandra-jdbc. I suspect this has something to do with the way I have defined my connection string. Any help would be greatly appreciated.
Let me try to explain what I have done.
I have created a keyspace using cqlsh with the following command
CREATE KEYSPACE authdb WITH
REPLICATION = {
'class' : 'SimpleStrategy',
'replication_factor' : 1
};
This is as per the documentation at: http://www.datastax.com/docs/1.2/cql_cli/cql/CREATE_KEYSPACE#cql-create-keyspace
I am able to create a table (column family) in cqlsh using
CREATE TABLE authdb.users(
user_name varchar PRIMARY KEY,
password varchar,
gender varchar,
session_token varchar,
birth_year bigint
);
This works correctly.
My problems start when I try to create the table using cassandra-jdbc-1.2.1.jar
The code I use is:
public static void createColumnFamily() {
    try {
        // Load the CQL3-capable JDBC driver
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        Connection con = DriverManager.getConnection("jdbc:cassandra://localhost:9160/authdb?version=3.0.0");
        String qry = "CREATE TABLE authdb.users(" +
                     "user_name varchar PRIMARY KEY," +
                     "password varchar," +
                     "gender varchar," +
                     "session_token varchar," +
                     "birth_year bigint" +
                     ")";
        Statement smt = con.createStatement();
        smt.executeUpdate(qry);
        con.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
When using cassandra-jdbc-1.2.1.jar I get the following error:
main DEBUG jdbc.CassandraDriver - Final Properties to Connection: {cqlVersion=3.0.0, portNumber=9160, databaseName=authdb, serverName=localhost}
main DEBUG jdbc.CassandraConnection - Connected to localhost:9160 in Cluster 'authdb' using Keyspace 'Test Cluster' and CQL version '3.0.0'
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Ljava/nio/ByteBuffer;Lorg/apache/cassandra/thrift/Compression;Lorg/apache/cassandra/thrift/ConsistencyLevel;)Lorg/apache/cassandra/thrift/CqlResult;
at org.apache.cassandra.cql.jdbc.CassandraConnection.execute(CassandraConnection.java:447)
Note: the cluster and keyspace reported in this log are not correct (they appear to be swapped).
When using cassandra-jdbc-1.1.2.jar I get the following error:
main DEBUG jdbc.CassandraDriver - Final Properties to Connection: {cqlVersion=3.0.0, portNumber=9160, databaseName=authdb, serverName=localhost}
main INFO jdbc.CassandraConnection - Connected to localhost:9160 using Keyspace authdb and CQL version 3.0.0
java.sql.SQLSyntaxErrorException: Cannot execute/prepare CQL2 statement since the CQL has been set to CQL3(This might mean your client hasn't been upgraded correctly to use the new CQL3 methods introduced in Cassandra 1.2+).
Note: in this instance the cluster and keyspace appear to be correct.
The error when using the 1.2.1 jar is because you have an old version of the cassandra-thrift jar. You need to keep that in sync with the cassandra-jdbc version. The cassandra-thrift jar is in the lib directory of the binary download.
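If you are not sure which cassandra-thrift version actually ends up on your classpath, a small reflection check like the sketch below (a diagnostic aid I wrote for illustration, not part of either driver; the class name is taken from the stack trace above) lists the execute_cql* methods your Cassandra$Client exposes. execute_cql3_query must be among them for the 1.2.1 JDBC driver to work:

import java.lang.reflect.Method;

public class ThriftClientCheck {
    public static void main(String[] args) throws Exception {
        // The Thrift client class that cassandra-jdbc calls into.
        Class<?> client = Class.forName("org.apache.cassandra.thrift.Cassandra$Client");
        // List the CQL execution methods provided by the cassandra-thrift jar on the classpath.
        for (Method m : client.getMethods()) {
            if (m.getName().startsWith("execute_cql")) {
                System.out.println(m);
            }
        }
    }
}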
Related
I have some failing tests after upgrading from Spring Boot 2.5.6 to 2.7.3.
For information, we use Oracle for the database and H2 for the tests.
The tests fail with the following error:
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException
In fact, the H2 version was 1.4.200 before and is 2.1.214 now, and a lot of things seem to have changed. The cause of the failure is not always the same, depending on the test:
- sometimes a table is not found (not solved yet);
- sometimes the error is "Values of types "BOOLEAN" and "INTEGER" are not comparable" (solved by updating a query that compared a boolean column with myBoolean = 0 to use myBoolean = false instead);
- and I also get an error on a query executed with a PageRequest.
For this last case, I have a Controller like this:
public Page<MyEntity> doSomething() {
final Sort sort = Sort.by(Order.desc("column1"));
final PageRequest pageRequest = PageRequest.of(0, 1000, sort);
return myEntityRepository.findAll(pageRequest);
}
But I get an error like this:
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "select myentity0_.id as id1_47_, myentity0_.column1 as column1_47_, myentity0_.column2 as column2_47_ from my_table myentity0_ order by myentity0_.column1 desc [*]limit ?"; SQL statement:
select myentity0_.id as id1_47_, myentity0_.column1 as column1_47_, myentity0_.column2 as column2_47_ from my_table myentity0_ order by myentity0_.column1 desc limit ? [42000-214]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:502)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:477)
at org.h2.message.DbException.get(DbException.java:223)
at org.h2.message.DbException.get(DbException.java:199)
at org.h2.message.DbException.getSyntaxError(DbException.java:247)
at org.h2.command.Parser.getSyntaxError(Parser.java:898)
at org.h2.command.Parser.prepareCommand(Parser.java:572)
at org.h2.engine.SessionLocal.prepareLocal(SessionLocal.java:631)
at org.h2.engine.SessionLocal.prepareCommand(SessionLocal.java:554)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1116)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:92)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:288)
at com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:337)
at com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:149)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:176)
... 205 more
If I change the Controller like this, the test passes:
public Page<MyEntity> doSomething() {
List<MyEntity> result = myEntityRepository.findAll();
return new PageImpl<MyEntity>(result);
}
So it seems that the problem is caused by the use of PageRequest.
Do you have any idea, please?
Java persistence libraries are usually tested only with the default Regular mode of H2 and may not work well with other compatibility modes.
Oracle doesn't support MySQL/PostgreSQL-style LIMIT, and H2 doesn't allow it in Oracle compatibility mode, but some libraries produce LIMIT instead of the standard OFFSET / FETCH for H2.
Spring Data JDBC (spring-data-relational) added support for custom compatibility modes of H2 only about a month ago, and version 2.4.3 with this fix isn't released yet.
Hibernate ORM 6.*.* should work well, but Hibernate ORM 5.6.* has a known issue:
https://hibernate.atlassian.net/jira/software/c/projects/HHH/issues/HHH-15318
You can enable LIMIT in Oracle compatibility mode of H2 as a temporary workaround. To do that, you need to execute the following Java code during initialization of your application:
org.h2.engine.Mode mode = org.h2.engine.Mode.getInstance("ORACLE");
mode.limit = true;
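For example, in a Spring Boot 2.x test setup (a sketch only; the configuration class name is mine, and the class must be registered with the failing tests, e.g. via @Import), the two lines above could run from a @PostConstruct hook so they execute before the paged query is prepared:

import javax.annotation.PostConstruct;

import org.springframework.boot.test.context.TestConfiguration;

@TestConfiguration
public class H2OracleModeWorkaround {

    @PostConstruct
    void enableLimitInOracleMode() {
        // Allow LIMIT in H2's Oracle compatibility mode, as described above.
        // This must run before Hibernate prepares the PageRequest query.
        org.h2.engine.Mode mode = org.h2.engine.Mode.getInstance("ORACLE");
        mode.limit = true;
    }
}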
I have a Spring Boot Gradle project with a MySQL database. Previously, under jOOQ version 3.13.6, my SQL was parsed without errors. When updating to a higher jOOQ version (3.14.x or 3.15.x) and generating/parsing the migrations with jOOQ, I get the following output:
SEVERE DDLDatabase Error: Your SQL string could not be parsed or
interpreted. This may have a variety of reasons, including:
The jOOQ parser doesn't understand your SQL
The jOOQ DDL simulation logic (translating to H2) cannot simulate your SQL
org.h2.jdbc.JdbcSQLSyntaxErrorException: Function "coalesce" not found;
A basic SQL example where the error occurs is given below. Parsing the same view worked with jOOQ 3.13.6.
DROP VIEW IF EXISTS view1;
CREATE VIEW view1 AS
SELECT COALESCE(SUM(table1.col1), 0) AS 'sum'
FROM table1;
I am currently lost here. I don't see any related changes in the jOOQ changelog.
Any help or pointers on where to look further would be highly appreciated.
Extended Stacktrace:
11:10:30 SEVERE DDLDatabase Error : Your SQL string could not be parsed or interpreted. This may have a variety of reasons, including:
- The jOOQ parser doesn't understand your SQL
- The jOOQ DDL simulation logic (translating to H2) cannot simulate your SQL
If you think this is a bug or a feature worth requesting, please report it here: https://github.com/jOOQ/jOOQ/issues/new/choose
As a workaround, you can use the Settings.parseIgnoreComments syntax documented here:
https://www.jooq.org/doc/latest/manual/sql-building/dsl-context/custom-settings/settings-parser/
11:10:30 SEVERE Error while loading file: /Users/axel/projects/service/./src/main/resources/db/migration/V5__create_view1.sql
11:10:30 SEVERE Error in file: /Users/axel/projects/service/build/tmp/generateJooq/config.xml. Error : Error while exporting schema
org.jooq.exception.DataAccessException: Error while exporting schema
at org.jooq.meta.extensions.AbstractInterpretingDatabase.connection(AbstractInterpretingDatabase.java:103)
at org.jooq.meta.extensions.AbstractInterpretingDatabase.create0(AbstractInterpretingDatabase.java:77)
at org.jooq.meta.AbstractDatabase.create(AbstractDatabase.java:332)
at org.jooq.meta.AbstractDatabase.create(AbstractDatabase.java:322)
at org.jooq.meta.AbstractDatabase.setConnection(AbstractDatabase.java:312)
at org.jooq.codegen.GenerationTool.run0(GenerationTool.java:531)
at org.jooq.codegen.GenerationTool.run(GenerationTool.java:237)
at org.jooq.codegen.GenerationTool.generate(GenerationTool.java:232)
at org.jooq.codegen.GenerationTool.main(GenerationTool.java:204)
Caused by: org.jooq.exception.DataAccessException: SQL [create view "view1" as select "coalesce"("sum"("table1"."col1"), 0) "sum" from "table1"]; Function "coalesce" not found; SQL statement:
create view "view1" as select "coalesce"("sum"("table1"."col1"), 0) "sum" from "table1" [90022-200]
at org.jooq_3.15.5.H2.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2988)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:639)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.meta.extensions.ddl.DDLDatabase.load(DDLDatabase.java:183)
at org.jooq.meta.extensions.ddl.DDLDatabase.lambda$export$0(DDLDatabase.java:156)
at org.jooq.FilePattern.load0(FilePattern.java:307)
at org.jooq.FilePattern.load(FilePattern.java:287)
at org.jooq.FilePattern.load(FilePattern.java:300)
at org.jooq.FilePattern.load(FilePattern.java:251)
at org.jooq.meta.extensions.ddl.DDLDatabase.export(DDLDatabase.java:156)
at org.jooq.meta.extensions.AbstractInterpretingDatabase.connection(AbstractInterpretingDatabase.java:100)
... 8 more
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Function "coalesce" not found; SQL statement:
create view "view1" as select "coalesce"("sum"("table1"."col1"), 0) "sum" from "table1" [90022-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:576)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.command.Parser.readJavaFunction(Parser.java:3565)
at org.h2.command.Parser.readFunction(Parser.java:3770)
at org.h2.command.Parser.readTerm(Parser.java:4305)
at org.h2.command.Parser.readFactor(Parser.java:3343)
at org.h2.command.Parser.readSum(Parser.java:3330)
at org.h2.command.Parser.readConcat(Parser.java:3305)
at org.h2.command.Parser.readCondition(Parser.java:3108)
at org.h2.command.Parser.readExpression(Parser.java:3059)
at org.h2.command.Parser.readFunctionParameters(Parser.java:3778)
at org.h2.command.Parser.readFunction(Parser.java:3772)
at org.h2.command.Parser.readTerm(Parser.java:4305)
at org.h2.command.Parser.readFactor(Parser.java:3343)
at org.h2.command.Parser.readSum(Parser.java:3330)
at org.h2.command.Parser.readConcat(Parser.java:3305)
at org.h2.command.Parser.readCondition(Parser.java:3108)
at org.h2.command.Parser.readExpression(Parser.java:3059)
at org.h2.command.Parser.parseSelectExpressions(Parser.java:2931)
at org.h2.command.Parser.parseSelect(Parser.java:2952)
at org.h2.command.Parser.parseQuerySub(Parser.java:2817)
at org.h2.command.Parser.parseSelectUnion(Parser.java:2649)
at org.h2.command.Parser.parseQuery(Parser.java:2620)
at org.h2.command.Parser.parseCreateView(Parser.java:6950)
at org.h2.command.Parser.parseCreate(Parser.java:6223)
at org.h2.command.Parser.parsePrepared(Parser.java:903)
at org.h2.command.Parser.parse(Parser.java:843)
at org.h2.command.Parser.parse(Parser.java:815)
at org.h2.command.Parser.prepareCommand(Parser.java:738)
at org.h2.engine.Session.prepareLocal(Session.java:657)
at org.h2.engine.Session.prepareCommand(Session.java:595)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1235)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:212)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:201)
at org.jooq.tools.jdbc.DefaultStatement.execute(DefaultStatement.java:102)
at org.jooq.impl.SettingsEnabledPreparedStatement.execute(SettingsEnabledPreparedStatement.java:227)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:414)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:335)
... 16 more
> Task :generateJooq FAILED
Jooq Configuration:
jooq {
version = "3.15.5"
edition = JooqEdition.OSS
configurations {
main {
generationTool {
generator {
name = 'org.jooq.codegen.KotlinGenerator'
strategy {
name = 'org.jooq.codegen.DefaultGeneratorStrategy'
}
generate {
relations = true
deprecated = false
records = true
immutablePojos = true
fluentSetters = true
daos = false
pojosEqualsAndHashCode = true
javaTimeTypes = true
}
target {
packageName = 'de.project.service.jooq'
}
database {
name = 'org.jooq.meta.extensions.ddl.DDLDatabase'
properties {
property {
key = 'scripts'
value = 'src/main/resources/db/migration/*.sql'
}
property {
key = 'sort'
value = 'semantic'
}
property {
key = 'unqualifiedSchema'
value = 'none'
}
property {
key = 'defaultNameCase'
value = 'lower'
}
}
}
}
}
}
}
You probably have the following configuration set:
<property>
<key>defaultNameCase</key>
<value>lower</value>
</property>
In jOOQ 3.15, this transforms all identifiers to lower case and quotes them before handing the SQL statement to H2 behind the scenes for DDL simulation, in order to emulate e.g. PostgreSQL behaviour, where unquoted identifiers are lower case, not upper case as in many other RDBMS.
There's a bug in the current implementation, which also quotes built-in functions, not just user-defined objects. See:
https://github.com/jOOQ/jOOQ/issues/9931 (general problem related to "system names")
https://github.com/jOOQ/jOOQ/issues/12752 (DDLDatabase specific problem)
The only workaround I can think of would be to turn off that property again and manually quote all identifiers in lower case. Alternatively, instead of using the DDLDatabase, you can always connect to an actual database, e.g. by using Testcontainers (see the sketch after this answer). That will be much more robust in many ways than the DDLDatabase anyway.
In any case, this is quite a frequent problem, so I've fixed it for the upcoming jOOQ 3.16. The above setting will no longer quote "system names", which are well-known identifiers of built-in functions.
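If you go the Testcontainers route, a minimal sketch could look like the following (it assumes the org.testcontainers:mysql module on the classpath and a running Docker daemon; applying your migrations and wiring the URL and credentials into the jOOQ generator configuration are not shown, and the class name is mine):

import org.testcontainers.containers.MySQLContainer;
import org.testcontainers.utility.DockerImageName;

public class CodegenDatabase {
    public static void main(String[] args) {
        // Start a throwaway MySQL instance to generate code against,
        // instead of letting the DDLDatabase simulate the DDL in H2.
        try (MySQLContainer<?> mysql = new MySQLContainer<>(DockerImageName.parse("mysql:8.0"))) {
            mysql.start();
            // Run your migrations against mysql.getJdbcUrl(), then point the
            // jOOQ code generator at the same URL, user and password.
            System.out.println(mysql.getJdbcUrl());
            System.out.println(mysql.getUsername());
            System.out.println(mysql.getPassword());
        }
    }
}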
I'm using Cassandra 2.1 and CQL 3.2.1. I want to let the user specify the keyspace name, replication strategy, and replication factor from the UI, and then pass these values into the query that executes the INSERT CQL, but it gives me a syntax error. I have tried a lot, but nothing works.
I create a keyspace called connect and a column family called keyspaces, but the insertion causes an error.
Here is my code:
from cassandra.cluster import Cluster

class Connection():
    def __init__(self, ips, keyspace, replication_strategy, replication_factor):
        self.keyspace = keyspace
        self.ips = ips
        self.replication_strategy = replication_strategy
        self.replication_factor = replication_factor
        cluster = Cluster([ips])
        session = cluster.connect()
        session.execute("CREATE keyspace IF NOT EXISTS connect with replication={ 'class' : 'SimpleStrategy', 'replication_factor' :1}")
        session.execute("CREATE TABLE IF NOT EXISTS connect.keyspaces (id int primary key , keyspaces_name text, replication_strategy text, replication_factor int)")
        session.execute("INSERT INTO connect.keyspaces(id , keyspaces_name , replication_strategy ,replication_factor ) VALUES (1 " + ',' + self.keyspace + ',' + self.replication_strategy + ',' + self.replication_factor + ")")
And the error message is:
File "cassandra/cluster.py", line 3822, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:74332)
raise self._final_exception
SyntaxException: <Error from server: code=2000 [Syntax error in CQL query] message="line 1:125 no viable alternative at input ',' (...) VALUES (1 ,noon,[SimpleStrategy],...)">
This means there is a syntax error in the INSERT statement you're building. It might be easier to troubleshoot if you print the string query you've built.
Alternatively I would suggest parameterizing your query to let the driver do formatting:
http://datastax.github.io/python-driver/getting_started.html#passing-parameters-to-cql-queries
I am using Spark SQL to connect to an Oracle database and get the data as DataFrames. I would like to write this retrieved data to an Avro file. While writing to Avro I am seeing multiple issues; could you help?
Here is the code -
val df = sqlContext.read.format("jdbc")
  .options(Map(
    "driver" -> "oracle.jdbc.driver.OracleDriver",
    "url" -> "jdbc:oracle:thin:user/password#host/service",
    "numPartitions" -> "1",
    "dbtable" -> "(Select * from schema.table WHERE STAGE_NUM <=39 and guid='I284ba1f9cdba11dea82ab9f4ee295c21')"))
  .load()
df.write.format("com.databricks.spark.avro").save("Outputfile")
Dependencies in my project:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>com.databricks</groupId>
  <artifactId>spark-avro_2.10</artifactId>
  <version>2.0.1</version>
</dependency>
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro</artifactId>
  <version>1.7.7</version>
</dependency>
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-mapred</artifactId>
  <version>1.7.7</version>
</dependency>
Here is the exception information -
java.lang.RuntimeException: com.databricks.spark.avro.DefaultSource does not allow create table as select
If I use df.write.avro("headnotes") instead, I get the following exception:
java.lang.IllegalAccessError: tried to access class org.apache.avro.SchemaBuilder$FieldDefault from class com.databricks.spark.avro.SchemaConverters$$anonfun$convertStructToAvro$1
Are the properties below in hive-site.xml correct for Hive access to Cassandra?
(I have copied the entire hive-default.xml content but have changed only the properties below.)
javax.jdo.option.ConnectionURL : cassandra://localhost:9160
javax.jdo.option.ConnectionDriverName:org.apache.cassandra.cql.jdbc.CassandraDriver
hive.stats.dbclass: jdbc:cassandra
hive.stats.jdbcdriver: org.apache.cassandra.cql.jdbc.CassandraDriver
hive.stats.dbconnectionstring: jdbc:cassandra:;databaseName=TempStatsStore;create=true
I am running a 1-node Cassandra cluster, but would later make it at least a 2-node cluster.
When I run the table creation command below, I get an error:
CREATE EXTERNAL TABLE MyHiveTable
(m string, n string, o string, p string)
STORED BY 'org.apache.hadoop.hive.cassandra.cql3.CqlStorageHandler'
TBLPROPERTIES ( "cassandra.ks.name" = "cql3ks",
"cassandra.cf.name" = "test",
"cassandra.cql3.type" = "text, text, text, text");
Error:
FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I don't know about the JDO settings, but you could try this link, which is a far better option for integrating Hive with Cassandra:
https://github.com/milliondreams/hive/tree/cas-support-cql/cassandra-handler