SQLExceptionHelper : Input parameter not set, Index : 0 - spring-boot

I am using Spring Data REST and Hibernate to expose a table "File(Id, Name)" from a SAP IQ (Sybase IQ) database. The error below occurs when I do a "GET" on the "File" table using "curl http://localhost:8080/files/1".
2022-07-20 20:01:03 WARN [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: JZ0SA
2022-07-20 20:01:03 ERROR [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - JZ0SA: Prepared Statement: Input parameter not set, index: 0.
2022-07-20 20:01:03 INFO [http-nio-8081-exec-1] o.h.e.i.DefaultLoadEventListener - HHH000327: Error performing load command
org.hibernate.exception.GenericJDBCException: could not extract ResultSet
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:67)
I have the same table in "Sybase 16" and it works perfectly with the same code base.
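For context, a minimal sketch of the mapping being described (the entity and repository below are assumptions based on the table and URL in the question, which does not show code):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// File.java -- hypothetical entity for the "File(Id, Name)" table above.
@Entity
@Table(name = "File")
public class File {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// FileRepository.java -- hypothetical repository; Spring Data REST exposes it at /files.
@RepositoryRestResource(path = "files")
public interface FileRepository extends JpaRepository<File, Long> {
}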
Has anyone faced a similar issue?
Thanks in advance.

Related

How can I use a function, say DIGITS, with Spring JPA and a native query on DB2

I want to be able to use this function and then pass the parameter. But I get this error; it is almost as if Spring JPA or Java prepared statements are not allowing me to do this.
The code looks like the following in the logs.
Hibernate: UPDATE DB2PROD.GLOBAL_TABLE
SET CONTRACT_ID = DIGITS( ? )
WHERE GLOBAL_ID = ?
#Query(value = "UPDATE DB2PROD.GLOBAL_TABLE \n" +
" SET CONTRACT_ID = DIGITS( :indicator ) \n" +
" WHERE GLOBAL_ID = :id ", nativeQuery = true)
int update(final Integer indicator, final String id);
And the error:
2022-03-11 15:54:25.982  WARN 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: -418, SQLState: 42610
2022-03-11 15:54:25.982 ERROR 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : DB2 SQL Error: SQLCODE=-418, SQLSTATE=42610, SQLERRMC=null, DRIVER=4.25.13
2022-03-11 15:54:25.982  WARN 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: -516, SQLState: 26501
2022-03-11 15:54:25.983 ERROR 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : DB2 SQL Error: SQLCODE=-516, SQLSTATE=26501, SQLERRMC=null, DRIVER=4.25.13
2022-03-11 15:54:25.983  WARN 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: -518, SQLState: 07003
2022-03-11 15:54:25.983 ERROR 95241 --- [ scheduling-1] o.h.engine.jdbc.spi.SqlExceptionHelper : DB2 SQL Error: SQLCODE=-518, SQLSTATE=07003, SQLERRMC=null, DRIVER=4.25.13
2022-03-11 15:54:26.017 ERROR 95241 --- [ scheduling-1] c.p.p.m.contacts.service.ContactService : Error on save contact
Db2 can't determine the data type of the parameter in the DIGITS(?) expression.
You must define it explicitly with CAST, like:
DIGITS(CAST(? AS INT))
or
DIGITS(CAST(? AS DEC(5)))
etc.
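Applied to the repository method above, the fix might look like this sketch (the @Modifying annotation and @Param bindings are assumptions about the surrounding repository, and the CAST type must match the actual column):
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Sketch: give Db2 an explicit type for the parameter so DIGITS can resolve it.
@Modifying
@Query(value = "UPDATE DB2PROD.GLOBAL_TABLE " +
        "SET CONTRACT_ID = DIGITS(CAST(:indicator AS INT)) " +
        "WHERE GLOBAL_ID = :id", nativeQuery = true)
int update(@Param("indicator") Integer indicator, @Param("id") String id);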

Apache NiFi Stateless - not able to set param of controller service (DBCPConnectionPool 1.10.0)

I am following the NiFi 1.10 stateless guideline to create a simple process group that executes a SQL statement against a MySQL database. I have put the necessary parameters for the DB controller service into a parameter context.
It works well on the NiFi canvas. Then I added it to the Registry and prepared a JSON parameter file, stateless-simpledb.json:
{
    "registryUrl": "http://localhost:18080",
    "bucketId": "cac8f127-e328-45c1-a4cb-0e03dc837ceb",
    "flowId": "cc2753f2-78f3-4449-a2fd-343dfeaafe15",
    "flowVersion": "3",
    "parameters": {
        "lastIngestId": "20000",
        "mysql-jdbc-driver-name": "com.mysql.jdbc.Driver",
        "db-user": "root",
        "db-password": "password",
        "db-con-url": "jdbc:mysql://localhost:3306/mms",
        "jdbc-jar-path": "/program/jdbc/mysql-connector-java.jar"
    }
}
and run the one-off command:
/program/nifi/bin/nifi.sh stateless RunFromRegistry Once --file /app/poc/nifi-stateless/conf/stateless-simpledb.json
It raises an error:
=== FlowFileRepository Type ===
org.apache.nifi.controller.repository.RocksDBFlowFileRepository
org.apache.nifi:nifi-framework-nar:1.10.0 || /program/nifi-1.10.0/work/stateless-nars/nifi-framework-nar-1.10.0.nar-unpacked
org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
org.apache.nifi:nifi-framework-nar:1.10.0 || /program/nifi-1.10.0/work/stateless-nars/nifi-framework-nar-1.10.0.nar-unpacked
org.apache.nifi.controller.repository.VolatileFlowFileRepository
org.apache.nifi:nifi-framework-nar:1.10.0 || /program/nifi-1.10.0/work/stateless-nars/nifi-framework-nar-1.10.0.nar-unpacked
=== End FlowFileRepository types ===
23:32:32.626 [main] INFO org.apache.nifi.stateless.bootstrap.ExtensionDiscovery - Successfully discovered extensions in 4411 milliseconds
23:32:32.633 [main] DEBUG org.apache.nifi.stateless.core.ComponentFactory - Setting context class loader to org.apache.nifi.nar.InstanceClassLoader@50fa5938 (parent = org.apache.nifi.nar.NarClassLoader[/program/nifi-1.10.0/work/stateless-nars/nifi-dbcp-service-nar-1.10.0.nar-unpacked]) to create org.apache.nifi.dbcp.DBCPConnectionPool
23:32:32.647 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input #{jdbc-jar-path} found 1 Parameter references: [org.apache.nifi.parameter.StandardParameterReference@2d3eecda]
23:32:32.650 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input /program/jdbc/mysql-connector-java.jar found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input 500 millis found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input 8 found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input 0 found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input 8 found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input -1 found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input -1 found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input 30 mins found 0 Parameter references: []
23:32:32.651 [main] DEBUG org.apache.nifi.parameter.ExpressionLanguageAwareParameterParser - For input -1 found 0 Parameter references: []
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.bootstrap.RunStatelessNiFi.main(RunStatelessNiFi.java:69)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.StatelessNiFi.main(StatelessNiFi.java:103)
... 5 more
Caused by: java.lang.RuntimeException: Failed to enable Controller Service {id=691ecc97-ff46-3a5e-8aad-37dc568bc247, name=MYSQL-MMS-stateless-test, type=class org.apache.nifi.dbcp.DBCPConnectionPool} because validation failed: ['Database Connection URL' is invalid because Database Connection URL is required, 'Database Driver Class Name' is invalid because Database Driver Class Name is required]
at org.apache.nifi.stateless.core.StatelessControllerServiceLookup.enableControllerServices(StatelessControllerServiceLookup.java:133)
at org.apache.nifi.stateless.core.StatelessFlow.<init>(StatelessFlow.java:153)
at org.apache.nifi.stateless.core.StatelessFlow.createAndEnqueueFromJSON(StatelessFlow.java:469)
at org.apache.nifi.stateless.runtimes.Program.runLocal(Program.java:133)
at org.apache.nifi.stateless.runtimes.Program.launch(Program.java:67)
... 10 more
It seems the Apache NiFi stateless runtime failed to set the controller service parameters even though they are in the "process group" scope.
Would anyone have any advice?
As mentioned in the comments, this appears to be a known problem with the validation of controller services.
It can be avoided by using NiFi 1.12 or above, as it was fixed in the following Jira ticket: https://issues.apache.org/jira/plugins/servlet/mobile#issue/NIFI-7380
Though I am not entirely sure of this, it may also simply indicate that your controller service is not configured correctly. That would be worth double-checking.
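For that double-check: the DBCPConnectionPool properties need to reference the parameters defined in the JSON file above, roughly as follows. The first two property names are taken verbatim from the error message; the remaining names and all of the assignments are assumptions based on the parameter file:
Database Connection URL     -> #{db-con-url}
Database Driver Class Name  -> #{mysql-jdbc-driver-name}
Database Driver Location(s) -> #{jdbc-jar-path}
Database User               -> #{db-user}
Password                    -> #{db-password}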

Spring JPA Oracle SUBSTRING

I'm using a custom @Query with Spring JPA like this:
#Query(value = "select ua from USER_ACCOUNT ua where SUBSTRING(ssn, -4, 4) = :ssn and last_nm = :lastName and birth_dt = :dateOfBirth")
UserAccount findByLastFourOfSSNAndLastNameAndDateOfBirth(#Param("ssn") String lastFourOfSSN, #Param("lastName") String lastName, #Param("dateOfBirth") Date dateOfBirth);
It works without any problems when I run it locally, but as soon as I deploy it out to the development environment I get:
[2017-12-19T23:02:59,048] WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper SQL Error: 904, SQLState: 42000
[2017-12-19T23:02:59,048] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper ORA-00904: "SUBSTRING": invalid identifier
[2017-12-19T23:02:59,049] ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[User Store] Servlet.service() for servlet [User Store] in context with path [] threw exception [org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet] with root cause
java.sql.SQLSyntaxErrorException: ORA-00904: "SUBSTRING": invalid identifier
The development environment is pretty much identical to my local one: same database, same version of Tomcat. Am I just missing something obvious here?
EDIT: SQL being generated:
select useraccoun0_.GUID as GUID1_5_, useraccoun0_.CREATED_BY as CREATED_BY2_5_, useraccoun0_.CREATED_DT as CREATED_DT3_5_, useraccoun0_.DB_NOTE as DB_NOTE4_5_, useraccoun0_.LAST_CHANGED_BY as LAST_CHANGED_BY5_5_, useraccoun0_.LAST_CHANGED_DT as LAST_CHANGED_DT6_5_, useraccoun0_.ACCOUNT_RECOVERY_EMAIL as ACCOUNT_RECOVERY_E7_5_, useraccoun0_.ACCOUNT_RECOVERY_PHONE as ACCOUNT_RECOVERY_P8_5_, useraccoun0_.BIRTH_DT as BIRTH_DT9_5_, useraccoun0_.DISABLED_IND as DISABLED_IND10_5_, useraccoun0_.DISABLED_DT as DISABLED_DT11_5_, useraccoun0_.DISABLED_REASON as DISABLED_REASON12_5_, useraccoun0_.email as email13_5_, useraccoun0_.FAILED_LOGIN_CNT as FAILED_LOGIN_CNT14_5_, useraccoun0_.FAILED_PASSWORD_RESET_DT as FAILED_PASSWORD_R15_5_, useraccoun0_.FIRST_NM as FIRST_NM16_5_, useraccoun0_.LAST_FAILED_LOGIN_DT as LAST_FAILED_LOGIN17_5_, useraccoun0_.LAST_LOGIN_DT as LAST_LOGIN_DT18_5_, useraccoun0_.LAST_NM as LAST_NM19_5_, useraccoun0_.LOCKED_IND as LOCKED_IND20_5_, useraccoun0_.LOCKED_CNT as LOCKED_CNT21_5_, useraccoun0_.LOCKED_DT as LOCKED_DT22_5_, useraccoun0_.USER_PASSWORD as USER_PASSWORD23_5_, useraccoun0_.PASSWORD_CHANGED_DT as PASSWORD_CHANGED_24_5_, useraccoun0_.PHONE_NBR as PHONE_NBR25_5_, useraccoun0_.REQUIRE_PASSWORD_RESET_IND as REQUIRE_PASSWORD_26_5_, useraccoun0_.SSN as SSN27_5_, useraccoun0_.USER_NM as USER_NM28_5_ from USER_ACCOUNT useraccoun0_ where substr(useraccoun0_.SSN, -4, 4)=? and last_nm=? and birth_dt=?
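Oracle has no SUBSTRING function, which is exactly what ORA-00904 is flagging; its equivalent is SUBSTR (and the locally generated SQL above already shows substr). One way to rule out dialect translation is a native query pinned to SUBSTR; a sketch, assuming the column names shown in the generated SQL:
import java.util.Date;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Sketch: bypass JPQL function translation entirely with a native query that
// uses Oracle's SUBSTR. Whether this resolves the local-vs-dev difference is
// an assumption.
@Query(value = "select * from USER_ACCOUNT " +
        "where substr(SSN, -4, 4) = :ssn " +
        "and LAST_NM = :lastName and BIRTH_DT = :dateOfBirth",
        nativeQuery = true)
UserAccount findByLastFourOfSSNAndLastNameAndDateOfBirth(
        @Param("ssn") String lastFourOfSSN,
        @Param("lastName") String lastName,
        @Param("dateOfBirth") Date dateOfBirth);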

Exception occurs when using the H2 Debug Console

I have set -DIGNITE_H2_DEBUG_CONSOLE in the JVM parameters.
Then the local H2 console started, but an exception occurred:
General error: "java.lang.UnsupportedOperationException"; SQL statement:
SELECT UPPER(VALUE) FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME=? [50000-191] HY000/50000
org.h2.jdbc.JdbcSQLException: General error: "java.lang.UnsupportedOperationException"; SQL statement:
SELECT UPPER(VALUE) FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME=? [50000-191]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.command.Command.executeQuery(Command.java:213)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:110)
at org.h2.bnf.context.DbContents.readContents(DbContents.java:136)
...
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException
at org.apache.ignite.internal.processors.query.h2.opt.GridH2Row.setKey(GridH2Row.java:101)
at org.h2.table.MetaTable.add(MetaTable.java:1982)
at org.h2.table.MetaTable.generateRows(MetaTable.java:940)
at org.h2.index.MetaIndex.find(MetaIndex.java:50)
at org.h2.index.BaseIndex.find(BaseIndex.java:132)
at org.h2.index.IndexCursor.find(IndexCursor.java:169)
at org.h2.table.TableFilter.next(TableFilter.java:460)
at org.h2.command.dml.Select.queryFlat(Select.java:541)
...
at org.h2.command.CommandContainer.query(CommandContainer.java:110)
at org.h2.command.Command.executeQuery(Command.java:201)
... 8 more
This is a known issue: https://issues.apache.org/jira/browse/IGNITE-3685
It is fixed in the upcoming Apache Ignite 1.8.

Pig filter fails due to unexpected data

I am running Cassandra and have about 20k records in it to play with. I am trying to run a filter in Pig on this data, but am getting the following message back:
2015-07-23 13:02:23,559 [Thread-4] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.lang.RuntimeException: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:260)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:205)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:263)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:44)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader$RowIterator.&lt;init&gt;(CqlRecordReader.java:259)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:151)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:256)
... 7 more
You would think this is an obvious error, and believe me, there are a ton of results on Google for it. It's clear that some piece of my data isn't conforming to the expected type of a given column. What I don't understand is 1.) why this is happening, and 2.) how to debug it. If I try to insert invalid data into Cassandra from my Node.js app, it throws this kind of error when my data type doesn't match the column's data type, which means this shouldn't be possible, right? I've read that data validation using UTF8 is wonky and that setting a different kind of validation is the answer, but I don't know how to do that. Here are my steps to reproduce:
grunt> define CqlNativeStorage org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> test = load 'cql://blah/blahblah' USING CqlNativeStorage();
grunt> describe test;
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - Found ksDef name: blah
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - partition keys: ["ad_id"]
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster keys: []
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - row key validator: org.apache.cassandra.db.marshal.UTF8Type
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster key validator: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
blahblah: {ad_id: chararray,address: chararray,city: chararray,date_created: long,date_listed: long,fireplace: bytearray,furnished: bytearray,garage: bytearray,neighbourhood: chararray,num_bathrooms: int,num_bedrooms: int,pet_friendly: bytearray,postal_code: chararray,price: double,province: chararray,square_feet: int,url: chararray,utilities_included: bytearray}
grunt> query1 = FILTER blahblah BY city == 'New York';
grunt> dump query1;
Then it runs for a while, dumps out tons of logs, and the error appears.
Discovered my problem: the Pig partitioner did not match CQL3, and therefore the data was being parsed incorrectly. Previously the environment variable was PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner. After I changed it to PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner, it started working.