"Method not supported" when creating table via Thrift Server JDBC in Spark 1.5 - jdbc

I have an instance of Spark 1.5 running a Thrift server. My database manager (DBeaver) connects to this Thrift server successfully. However, when I try to run the following statement:
CREATE TABLE test (
  id int
)
I receive: DBCException: SQL Error: Method not supported
java.sql.SQLException: SQLException: Method not supported
The interesting thing is, the table is in fact created. When I try:
beeline> show tables;
+------------+--------------+--+
| tableName  | isTemporary  |
+------------+--------------+--+
| test       | false        |
+------------+--------------+--+
If I try to create a similar table from beeline, it is created without any error messages.
0: jdbc:hive2://localhost:10000> CREATE TABLE test02( id INT );
+---------+--+
| result |
+---------+--+
+---------+--+
The question is: how can I create tables via JDBC without receiving this error message?

Related

Self hosting Supabase using Postgres

I am trying to self-host Supabase with PostgreSQL (AWS RDS). I have got the db up and running.
I downloaded their docker directory. It's working fine with the default settings, but when I point it at my PostgreSQL,
supabase-auth doesn't seem to work. It gives me the following error:
supabase-auth | time="2022-07-15T14:09:51Z" level=fatal msg="running db migrations: Migrator: problem creating schema migrations: CREATE TABLE \"schema_migrations\" (\n\"version\" VARCHAR (14) NOT NULL\n);\nCREATE UNIQUE INDEX \"schema_migrations_version_idx\" ON \"schema_migrations\" (version);: ERROR: no schema has been selected to create in (SQLSTATE 3F000)"
supabase-auth | [POP] 2022/07/15 14:09:51 info - 2.6074 seconds
supabase-auth exited with code 1
Any help will be appreciated!
Thanks in advance.
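SQLSTATE 3F000 means Postgres could not resolve any schema to create the table in: nothing on the connecting role's search_path exists in the target database. A minimal sketch of preparing the external RDS database by hand, assuming supabase-auth connects as a role named supabase_auth_admin and migrates into an auth schema (both names are assumptions here; check your docker-compose environment):
-- Assumed names: supabase_auth_admin role and auth schema, per typical Supabase setups.
CREATE SCHEMA IF NOT EXISTS auth AUTHORIZATION supabase_auth_admin;
-- Make the schema resolvable so CREATE TABLE "schema_migrations" has a target.
ALTER ROLE supabase_auth_admin SET search_path TO auth;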

Alter table in Hive is not working for serde 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe' in Apache Hive (version 2.1.1-cdh6.3.4)

Environment:
Apache Hive (version 1.1.0-cdh5.14.2)
I tried creating a table with the DDL below.
create external table test1 (
  v_src_code string,
  d_extraction_date date
)
partitioned by (d_mis_date date)
row format serde 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
with serdeproperties ("field.delim"="~|")
stored as textfile
location '/hdfs_path/test1'
tblproperties ("serialization.null.format"="");
Then I altered this table, adding one extra column as below.
alter table test1 add columns(n_limit_id bigint);
This is working perfectly fine.
But recently our cluster got upgraded. The new environment is:
Apache Hive (version 2.1.1-cdh6.3.4)
The same table was created in this new environment, but when I run the same alter table statement I get the error below.
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Error: type expected at the position 0 of '<derived from deserializer>:bigint' but '<' is found. (state=08S01,code=1)
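The error suggests the upgraded Hive can no longer derive the column types from this serde while altering the table. One hedged workaround, relying only on the fact that the table is EXTERNAL (so dropping it leaves the HDFS data untouched), is to drop and recreate it with the new column in the DDL, then re-register the partitions, assuming the partition directories follow the standard d_mis_date=... layout:
-- The table is EXTERNAL, so DROP removes only metadata; data stays at /hdfs_path/test1.
drop table test1;
create external table test1 (
  v_src_code string,
  d_extraction_date date,
  n_limit_id bigint
)
partitioned by (d_mis_date date)
row format serde 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
with serdeproperties ("field.delim"="~|")
stored as textfile
location '/hdfs_path/test1'
tblproperties ("serialization.null.format"="");
-- Re-register the existing partition directories after recreating the table.
msck repair table test1;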

Unable to obtain table lock - another Flyway instance may be running

I'm using the integration of Spring Boot and Flyway (6.5.5) to run updates for a CockroachDB cluster. When several instances of the service start at the same time, all of them try to lock the flyway_schema_history table to validate migrations. However, the following exception occurs:
2020-09-09 00:00:00.013 ERROR 1 --- [ main] o.s.boot.SpringApplication :
Application run failed org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]:
Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException:
Unable to obtain table lock - another Flyway instance may be running
I could not find any config property to tweak this. Maybe someone has faced the same issue and solved it somehow?
Workaround: restart the service.
After debugging the issue, it turned out to be caused by some very odd Flyway behaviour, described in the comment on org.flywaydb.core.internal.database.cockroachdb.CockroachDBTable:
CockroachDB-specific table.
Note that CockroachDB doesn't support table locks. We therefore use a row in the schema history as a lock indicator; if another process has inserted such a row we wait (potentially indefinitely) for it to be removed before carrying out a migration.
So, in my case, the service was restarted while a migration was being applied, and this pseudo-lock record was left behind forever.
The workaround was to delete the "lock" row manually:
installed_rank  | -100
version         | d9ab17626a4d66a4d8a89fe9bdca98e9
description     | flyway-lock
type            |
script          |
checksum        | 0
installed_by    |
installed_on    | 2020-09-14 11:25:02.874838+00:00
execution_time  | 0
success         | true
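A sketch of the manual cleanup, assuming the default history table name (flyway_schema_history) and the lock row exactly as shown above:
-- Remove the stale pseudo-lock row left behind by the interrupted migration.
DELETE FROM flyway_schema_history
WHERE installed_rank = -100
  AND description = 'flyway-lock';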
Hope it helps someone.
The appropriate ticket has been created: https://github.com/flyway/flyway/issues/2932

Unable to create a table from Hive CLI - ERROR 23502

I seem to be getting the exception below when I try to create a table using the Hive client.
create table if not exists test (id int, name string) comment 'test table';
11:15:32.016 [HiveServer2-Background-Pool: Thread-34] ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler - Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDODataStoreException: Insert of object "org.apache.hadoop.hive.metastore.model.MTable#784fafc2" using statement "INSERT INTO TBLS (TBL_ID,CREATE_TIME,DB_ID,LAST_ACCESS_TIME,OWNER,RETENTION,SD_ID,TBL_NAME,TBL_TYPE,VIEW_EXPANDED_TEXT,VIEW_ORIGINAL_TEXT) VALUES (?,?,?,?,?,?,?,?,?,?,?)" failed : Column 'IS_REWRITE_ENABLED' cannot accept a NULL value.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:720)
Caused by: ERROR 23502: Column 'IS_REWRITE_ENABLED' cannot accept a NULL value.
at org.apache.derby.client.am.ClientStatement.completeExecute(Unknown Source)
at org.apache.derby.client.net.NetStatementReply.parseEXCSQLSTTreply(Unknown Source)
I did search but couldn't find a satisfactory resolution.
Here is my setup:
Hive 2.1.0
OS: Windows
Hadoop: 2.9.2
Derby: 10.14.2.0
What am I missing?
Thanks.
Seems like a compatibility issue with Derby. I moved back to an earlier version of Derby, 10.2.1.1, and the issue went away.
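If downgrading is not an option, an untested alternative sketch would be to give the metastore column a default, so that the Hive 2.1.0 INSERT shown above, which omits IS_REWRITE_ENABLED entirely, no longer leaves it NULL (APP is Derby's default schema; verify it matches your metastore):
-- Hypothetical workaround, not verified against this setup:
-- let Derby fill IS_REWRITE_ENABLED ('N' = rewrite disabled) when an INSERT omits it.
ALTER TABLE APP.TBLS ALTER COLUMN IS_REWRITE_ENABLED SET DEFAULT 'N';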

Cannot create Hive external table using jdbcStorageHandler

I am running a small cluster in Amazon EMR in order to play with Apache Hive 2.3.5. It is my understanding that Apache Hive can import data from a remote database and have the cluster run queries. I was following an example provided in the Apache Hive web documentation (https://cwiki.apache.org/confluence/display/Hive/JdbcStorageHandler) and created the following code:
CREATE EXTERNAL TABLE hive_table
(
  col1 int,
  col2 string,
  col3 date
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  'hive.sql.database.type'='POSTGRES',
  'hive.sql.jdbc.driver'='org.postgresql.Driver',
  'hive.sql.jdbc.url'='jdbc:postgresql://<url>/<dbname>',
  'hive.sql.dbcp.username'='<username>',
  'hive.sql.dbcp.password'='<password>',
  'hive.sql.table'='<dbtable>',
  'hive.sql.dbcp.maxActive'='1'
);
But I get the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException java.lang.IllegalArgumentException: Property hive.sql.query is required.)
According to the documentation, I need to specify either "hive.sql.table" or "hive.sql.query" to tell Hive how to get data from the JDBC database. But if I replace hive.sql.table with hive.sql.query, I get the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException java.lang.IllegalArgumentException: No enum constant org.apache.hive.storage.jdbc.conf.DatabaseType.POSTGRES)
I tried looking on the web for a solution, and it doesn't look like anyone has experienced the same issue I am having. Do I need to modify a config file, or am I missing something critical in my code?
I think you are using a version of the jar which doesn't support POSTGRES.
Download the latest jar from this link:
http://repo1.maven.org/maven2/org/apache/hive/hive-jdbc-handler/3.1.2/hive-jdbc-handler-3.1.2.jar
Put the downloaded jar into an HDFS location.
Run hive normally.
Run the command: add jar ${HDFS_PATH_TO_DOWNLOADED_JAR}
Run your create table command.
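For completeness, a hedged sketch of the same DDL with hive.sql.query in place of hive.sql.table, which the error message asks for on some handler versions (the SELECT is a placeholder; its column list must match the Hive columns):
CREATE EXTERNAL TABLE hive_table
(
  col1 int,
  col2 string,
  col3 date
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  'hive.sql.database.type'='POSTGRES',
  'hive.sql.jdbc.driver'='org.postgresql.Driver',
  'hive.sql.jdbc.url'='jdbc:postgresql://<url>/<dbname>',
  'hive.sql.dbcp.username'='<username>',
  'hive.sql.dbcp.password'='<password>',
  'hive.sql.query'='SELECT col1, col2, col3 FROM <dbtable>',
  'hive.sql.dbcp.maxActive'='1'
);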
