SQL MINUS and lower/upper don't work together in JDBC - Spring

I have an HSQLDB 2.2.9 database and the following statement:
(SELECT lower(MyCol) FROM MyTable WHERE ID = ?)
MINUS
(SELECT lower(MyCol) FROM MyTable WHERE ID = ?)
It works in SQuirreL. But when I execute it in my program, which uses JDBC, I get the following exception:
Exception in thread "main" org.springframework.dao.TransientDataAccessResourceException: PreparedStatementCallback; SQL [(SELECT lower(MyCol) FROM MyTable WHERE ID = ? ) MINUS (SELECT lower(MyCol) FROM MyTable WHERE ID_CENTER = ?)]; Column not found: MyCol; nested exception is java.sql.SQLException: Column not found: MyCol
If I remove the lower(), the statement works, but then the comparison is case-sensitive, which is what I want to eliminate here.
Can someone please tell me why I get this error and how to fix it?

This exception is not thrown by HSQLDB 2.2.9. If the column could not be found, the exception message would be in this form:
user lacks privilege or object not found: MYCOL
Please check your Spring data source settings.
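To rule out a misconfigured data source, you can run the exact same MINUS statement through a plain JdbcTemplate pointed at the database you query from SQuirreL. A minimal sketch; the driver URL, credentials, and ID values are illustrative assumptions, not taken from the question:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Minimal sketch: run the MINUS statement against an explicit HSQLDB data source.
// The URL, credentials, and ID values below are illustrative assumptions.
public class MinusCheck {
    public static void main(String[] args) {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.hsqldb.jdbc.JDBCDriver");
        ds.setUrl("jdbc:hsqldb:hsql://localhost/mydb");
        ds.setUsername("SA");
        ds.setPassword("");

        JdbcTemplate jdbc = new JdbcTemplate(ds);
        List<String> diff = jdbc.queryForList(
            "(SELECT lower(MyCol) FROM MyTable WHERE ID = ?) MINUS "
                + "(SELECT lower(MyCol) FROM MyTable WHERE ID = ?)",
            String.class, 1, 2);
        System.out.println(diff);
    }
}

If this prints the expected rows, the statement itself is fine and the exception points at whatever data source the application is actually wired to.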

Related

Spring SimpleJdbcInsert fails with 'INSERT has more target columns than expressions'

I am using a SimpleJdbcInsert to insert rows into a PostgreSQL database. However, I get the following error:
Caused by: org.postgresql.util.PSQLException: ERROR: INSERT has more
target columns than expressions.
org.springframework.jdbc.UncategorizedSQLException:
PreparedStatementCallback; uncategorized SQLException for SQL [INSERT
INTO product (product_id,product_name,product_code,in_
stock,product_category) VALUES(?)]; SQL state [25P02]; error code [0];
ERROR: current transaction is aborted, commands ignored until end of
transaction block; nested exception is
org.postgresql.util.PSQLException: ERROR: current transaction is
aborted, commands ignored until end of transaction block
The number of columns is exactly the same as the number of values I am trying to insert when I print out the MapSqlParameterSource object, shown below:
Parameters Names ::
[
product_id,
product_name,
product_code,
in_ stock,
product_category
]
Parameters Values :: [{
product_id=1518,
product_name=Sofa,
product_code=150,
in_stock=true,
product_category=null,
}]
The product_id is the primary key and it is not null. Could the problem be that I am not using an auto-generated primary key? I still do not understand why that would be a problem.
The columns shown in the error message are precisely the same as the columns in the parameter list I'm printing, and the values tally with the number of columns, so I'm really baffled as to why PostgreSQL is giving this error. Please help!
I was able to solve it by using a different approach instead of Spring JDBC.
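For anyone who wants to stay with Spring JDBC: the SQL in the error shows five target columns but only a single ? placeholder, so it looks like only one parameter was matched. One thing worth double-checking is that every parameter name matches a column name exactly; the printed names above show in_ stock with a space while the values show in_stock, which may just be copy/paste wrapping, but is worth verifying. A minimal sketch, assuming the table's real column is in_stock, that pins the column list with usingColumns:

import javax.sql.DataSource;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;

// Minimal sketch: declare the column list explicitly so the generated INSERT
// has exactly one placeholder per column. Column names are assumptions based
// on the question's parameter dump.
public class ProductInserter {
    private final SimpleJdbcInsert insert;

    public ProductInserter(DataSource dataSource) {
        this.insert = new SimpleJdbcInsert(dataSource)
                .withTableName("product")
                .usingColumns("product_id", "product_name", "product_code",
                        "in_stock", "product_category");
    }

    public void insertProduct() {
        MapSqlParameterSource params = new MapSqlParameterSource()
                .addValue("product_id", 1518)
                .addValue("product_name", "Sofa")
                .addValue("product_code", 150)
                .addValue("in_stock", true)
                .addValue("product_category", null);
        insert.execute(params);
    }
}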

Error message Multi Part Identifier could not be bound

I am writing a SQL query to use in a Boyum validation that will flag any BP Master Data names that are alike.
Here is the query I have written:
IF OCRD.Cardname IN (Select OCRD.Cardname from OCRD WHERE OCRD.Cardname
LIKE '%'+Cardname+'%')
BEGIN
SELECT 'Duplicate'
FOR BROWSE
END
Here are the error messages I have received:
[Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The multi-part identifier "OCRD.Cardname" could not be bound.
[Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared.
The SQL you provided is invalid; you can't reference a table column like that outside of a query. Use EXISTS instead:
IF EXISTS (Select OCRD.Cardname from OCRD WHERE OCRD.Cardname LIKE '%'+Cardname+'%')
BEGIN
SELECT 'Duplicate'
FOR BROWSE
END
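If you also need this duplicate check from application code, here is a minimal JDBC sketch; it binds the search pattern as a parameter instead of concatenating it into the statement. The open Connection is assumed; the table and column names follow the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: parameterized duplicate check against OCRD.
public class DuplicateNameCheck {
    public static boolean isDuplicate(Connection conn, String cardName) throws SQLException {
        String sql = "SELECT 1 FROM OCRD WHERE Cardname LIKE ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "%" + cardName + "%");
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // any row means a similar name already exists
            }
        }
    }
}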

Spark SQL throwing error "java.lang.UnsupportedOperationException: Unknown field type: void"

I am getting the error below in Spark (1.6) SQL while creating a table with a column whose value defaults to NULL. For example: create table test as select column_a, NULL as column_b from test_temp;
The same statement works in Hive and creates the column with data type "void".
For now I am using an empty string instead of NULL to avoid the exception, so the new column gets the string data type.
Is there a better way to insert null values into a Hive table using Spark SQL?
2017-12-26 07:27:59 ERROR StandardImsLogger$:177 - org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.UnsupportedOperationException: Unknown field type: void
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:789)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:746)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$createTable$1.apply$mcV$sp(ClientWrapper.scala:428)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$createTable$1.apply(ClientWrapper.scala:426)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$createTable$1.apply(ClientWrapper.scala:426)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:293)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:239)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:238)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:281)
at org.apache.spark.sql.hive.client.ClientWrapper.createTable(ClientWrapper.scala:426)
at org.apache.spark.sql.hive.execution.CreateTableAsSelect.metastoreRelation$lzycompute$1(CreateTableAsSelect.scala:72)
at org.apache.spark.sql.hive.execution.CreateTableAsSelect.metastoreRelation$1(CreateTableAsSelect.scala:47)
at org.apache.spark.sql.hive.execution.CreateTableAsSelect.run(CreateTableAsSelect.scala:89)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:153)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)
I couldn't find much information regarding the data type void, but it looks like it is somewhat equivalent to the Any data type we have in Scala.
The table at the end of this page explains that a void can be cast to any other data type.
Here are some JIRA issues that are similar to the problem you are facing:
HIVE-2901
HIVE-747
So, as mentioned in the comment, instead of NULL you can cast it to any of the implicit data types.
select cast(NULL as string) as column_b
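For completeness, here is the same workaround driven from Java against Spark 1.6, as a minimal sketch; it assumes test_temp already exists in the metastore, and the app name is illustrative:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.hive.HiveContext;

// Minimal sketch for Spark 1.6: casting NULL to an explicit type keeps the
// CTAS column out of the unsupported void type.
public class CreateTableWithNullColumn {
    public static void main(String[] args) {
        SparkContext sc = new SparkContext(new SparkConf().setAppName("ctas-null-cast"));
        HiveContext hiveContext = new HiveContext(sc);
        hiveContext.sql(
            "CREATE TABLE test AS "
                + "SELECT column_a, CAST(NULL AS STRING) AS column_b FROM test_temp");
        sc.stop();
    }
}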
I started to get a similar issue. I boiled the code down to an example:
WITH DATA AS (
    SELECT 1 ISSUE_ID,
           DATE(NULL) DueDate,
           MAKE_DATE(2000, 01, 01) DDate
    UNION ALL
    SELECT 1 ISSUE_ID,
           MAKE_DATE(2000, 01, 01),
           MAKE_DATE(2000, 01, 02)
)
SELECT ISNOTNULL(LAG(IT.DueDate, 1) OVER (PARTITION BY IT.ISSUE_ID ORDER BY IT.DDate))
   AND ISNULL(IT.DueDate)
FROM DATA IT

Oracle Mybatis insert all

I am trying to do a MyBatis batch insert with Oracle.
I tried the INSERT ALL in SQL Developer and it works, but with MyBatis here it complains.
How can I fix this?
<insert
    id="insertBatch"
    parameterType="java.util.List"
    keyProperty="id"
    keyColumn="COMMENT_ID"
    useGeneratedKeys="true">
  INSERT ALL
  <foreach collection="list" item="comment" index="index">
    INTO COMMENT (value1, value2)
    VALUES (#{comment.value1}, #{comment.value2})
  </foreach>
  SELECT *
  FROM dual
</insert>
And I get an error like this:
org.springframework.jdbc.BadSqlGrammarException:
Error updating database. Cause: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
The error may involve xxxx-Inline
The error occurred while setting parameters
SQL: INSERT ALL INTO COMMENT ( VALUE1, VALUE2) VALUES ( ?, ? ) INTO COMMENT ( VALUE1, VALUE2 ) VALUES ( ?, ? ) SELECT * FROM dual;
Cause: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended; bad SQL grammar []; nested exception is java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
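Not from this thread, but a common alternative worth sketching: skip INSERT ALL entirely and let MyBatis's batch executor send a plain single-row INSERT as one JDBC batch. The Comment class and the insertOne statement below are hypothetical names:

import java.util.List;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

// Minimal sketch: a batch-executor session replays a single-row INSERT for
// each element instead of building one multi-row INSERT ALL statement.
public class CommentBatchInsert {
    public static class Comment {
        public String value1;
        public String value2;
    }

    public interface CommentMapper {
        void insertOne(Comment comment); // maps INSERT INTO COMMENT (value1, value2) VALUES (...)
    }

    public static void insertBatch(SqlSessionFactory factory, List<Comment> comments) {
        try (SqlSession session = factory.openSession(ExecutorType.BATCH)) {
            CommentMapper mapper = session.getMapper(CommentMapper.class);
            for (Comment comment : comments) {
                mapper.insertOne(comment);
            }
            session.commit(); // flushes the queued JDBC batch
        }
    }
}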

SELECT a table from oracle data dictionary

I am new to SQL and recently installed Oracle 11g. I read the post here on selecting all tables from user_tables. I'm trying to select a specific table, but following some of the suggestions in the post does not appear to work.
The following statements execute fine and return all tables available to me, including a table named faculty_t:
select * from user_tables;
select * from dba_tables;
select * from all_tables;
desc faculty_t;
But I get an error when I do the following:
select * from user_tables where table_name = FACULTY_T;
The first set of statements confirms that I do have a table named faculty_t. However, trying to select this table from user_tables, all_tables, or dba_tables does not appear to work for me right now. The error message reads something like:
ORA-00904: "FACULTY_T": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 208 Column: 8
Any thoughts? Thanks!
String literals in SQL are wrapped in single quotes ('). So:
select * from user_tables where table_name = 'FACULTY_T';
When you did a desc faculty_t, the SQL engine knew that a table name was expected at that spot (the syntax expects a table name there). But in your SELECT query, SQL is just looking for the value of a column that happens to have a string data type, so you need the single quotes to form a string literal.
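The same rule applies when querying the dictionary from JDBC: bind the name as a string parameter. A minimal sketch, assuming an open connection to the Oracle 11g database; note that unquoted identifiers are stored in upper case in the data dictionary, hence 'FACULTY_T':

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: look up one table in user_tables with a bind variable.
public class DictionaryLookup {
    public static void printTable(Connection conn) throws SQLException {
        String sql = "SELECT table_name FROM user_tables WHERE table_name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "FACULTY_T"); // dictionary stores unquoted names in upper case
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("table_name"));
                }
            }
        }
    }
}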
