I am using Oracle in my application.
I have a problem with this code:
import org.hibernate.Hibernate;
import org.hibernate.SQLQuery;
import org.hibernate.Session;
import org.hibernate.Query;
....
....
.....
public void insertGR(String id, String num) {
    String query = "execute md_pkg.insert_Gr(" + id + "," + num + ")";
    SQLQuery sqlQuery = this.getSession().createSQLQuery(query);
    sqlQuery.executeUpdate();
}
In the JBoss console I get this error:
SQL Error: 900, SQLState: 42000
ORA-00900: invalid SQL statement
org.hibernate.exception.SQLGrammarException: could not execute native bulk manipulation query
at
but when I run the call in SQL Developer I do not have any problem:
execute md_pkg.insert_Gr(9,25)
ORA-00900: invalid SQL statement should give you enough clues as to what's going wrong. The statement you are trying to execute is not SQL; it is an attempt to call a PL/SQL stored procedure. So, instead of
"execute md_pkg.insert_Gr(" + id + "," + num + ")"
you should (maybe! I don't know Hibernate at all) rather use
"call md_pkg.insert_Gr(" + id + "," + num + ")"
or
"begin md_pkg.insert_Gr(" + id + "," + num + "); end;"
and, instead of using the SQLQuery class, try to find something capable of executing stored procedures or anonymous PL/SQL blocks.
PS: I don't know Hibernate at all, so I can't help you find the correct class.
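For what it's worth, here is a minimal sketch of that last suggestion, dropping down to plain JDBC through the Hibernate session and using the JDBC call escape syntax. It assumes a Hibernate version that exposes Session.doWork (3.2 or later), Java 8 for the lambda, and that both procedure arguments really are strings; adjust the bindings to the procedure's actual parameter types.

import java.sql.CallableStatement;
import org.hibernate.Session;

public void insertGR(String id, String num) {
    Session session = this.getSession();
    // Run the procedure through a raw JDBC CallableStatement obtained from the
    // Hibernate session, binding the values instead of concatenating them into SQL.
    session.doWork(connection -> {
        try (CallableStatement cs =
                 connection.prepareCall("{call md_pkg.insert_Gr(?, ?)}")) {
            cs.setString(1, id);
            cs.setString(2, num);
            cs.execute();
        }
    });
}

Binding the parameters also avoids building the statement by string concatenation, which is open to SQL injection.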
@Query(nativeQuery = true, value = "SELECT *\n" +
        "FROM TDED_VISITS v \n" +
        "WHERE v.v_collected_data.assigneeId = ?")
List<Visit> findAllByAssigneeId(String assigneeId);
The above code is used in the repository; the purpose of the query is to return all "visits" that have a value matching the one provided. This value is a single field inside a JSON object stored in a CLOB column in an Oracle database, such as the example below.
{"visitId" : 1, "assigneeId" : "agr512"}
The entity is mapped with the associated field as below.
@Column(name = "V_COLLECTED_DATA")
@Lob
private String visitJsonString;
The errors in the logs are as below:
Column "V.V_COLLECTED_DATA.ASSIGNEEID" not found; SQL statement:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not prepare statement; SQL [SELECT *
FROM TDED_VISITS v
WHERE v.v_collected_data.assigneeId = ?]; nested exception is org.hibernate.exception.SQLGrammarException: could not prepare statement
I have also tried the SQL query below, but it gives an error that the function JSON_VALUE was not found.
@Query(nativeQuery = true, value = "SELECT *\n" +
        "FROM TDED_VISITS v \n" +
        "WHERE JSON_VALUE(V_COLLECTED_DATA, '$.assigneeId') = ?")
Caused by: org.springframework.orm.jpa.JpaSystemException: could not prepare statement; nested exception is org.hibernate.exception.GenericJDBCException: could not prepare statement
I have spent a few days looking for similar problems other people have had, but to no avail. I would appreciate any help, thanks!
H2 does not support accessing JSON attributes. See here for a list of supported functions: https://www.h2database.com/html/functions.html
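In other words, the native query fails wherever the application runs against H2 (typically tests or local development) instead of Oracle. If switching those runs to Oracle is not an option, one possible test-only workaround (purely a sketch, not part of the original answer) is to register a minimal JSON_VALUE substitute as an H2 alias backed by a static Java method. The package, class name, and the assumption that H2 will hand the CLOB column to a String parameter are all hypothetical here.

package com.example.test;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical test helper, registered in the H2 test schema with something like
//   CREATE ALIAS IF NOT EXISTS JSON_VALUE FOR "com.example.test.H2Json.jsonValue";
// (the exact quoting of the class/method reference depends on the H2 version).
public class H2Json {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Supports only simple '$.field' paths, which is all the query above needs.
    public static String jsonValue(String json, String path) throws Exception {
        if (json == null || path == null || !path.startsWith("$.")) {
            return null;
        }
        JsonNode node = MAPPER.readTree(json).get(path.substring(2));
        return node == null ? null : node.asText();
    }
}

The simpler route, if it is available, is to run that query against Oracle (or another database that implements JSON_VALUE) rather than H2.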
I am referring to this documentation: http://www-01.ibm.com/support/docview.wss?uid=swg21981328. As per the article, if we use the executeBatch method then inserts will be faster (the Netezza JDBC driver may detect a batch insert and, under the covers, convert it to an external table load, and an external table load is faster). I have to execute millions of insert statements, and I am getting a speed of at most 500 records per minute per connection. Is there a better way to load data into Netezza faster via a JDBC connection? I am using Spark and a JDBC connection to insert the records. Why is the external table load not happening even when I am executing in batches? Given below is the Spark code I am using:
// insertQueryDataSet is a Dataset<String> whose rows are complete INSERT statements
insertQueryDataSet.foreachPartition(partition -> {
    Connection conn = NetezzaConnector.getSingletonConnection(url, userName, pwd);
    conn.setAutoCommit(false);
    int insertBatchCount = 0;
    Statement statement = conn.createStatement();
    while (partition.hasNext()) {
        insertBatchCount++;
        statement.addBatch(partition.next());
        if (insertBatchCount % 10000 == 0) {
            LOGGER.info("Before executeBatch.");
            int[] execCount = statement.executeBatch();
            LOGGER.info("After execCount." + execCount.length);
            LOGGER.info("Before commit.");
            conn.commit();
            LOGGER.info("After commit.");
        }
    }
    // execute and commit the remaining statements
    int[] execCount = statement.executeBatch();
    LOGGER.info("After execCount." + execCount.length);
    conn.commit();
    conn.close();
});
I tried this approach (batch insert) but found it very slow, so I put all the data into CSV files and did an external table load for each CSV:
insertReq = "Insert into " + tablename + " select * from external '" + filepath + "' using (maxerrors 0, delimiter ',' unase 2000 encoding 'internal' remotesource 'jdbc' escapechar '\\' )";
jdbcTemplate.execute(insertReq);
Since I was using Java, the remote source is JDBC; note that the CSV file path is in single quotes.
Hope this helps.
If you find a better approach than this, don't forget to post. :)
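For completeness, a rough sketch of how the CSV-plus-external-table approach can look when driven from Spark. This is an illustration only: the dataset name, the assumption that each row is already a comma-separated line, and the temp-file handling are not from the original answer, and the external table options are trimmed to the essentials.

// Write each partition to a temporary CSV, then load it in one shot through a
// Netezza external table over the same JDBC connection (remotesource 'jdbc').
csvLineDataset.foreachPartition(partition -> {
    java.io.File csv = java.io.File.createTempFile("nz_load_", ".csv");
    try (java.io.PrintWriter out = new java.io.PrintWriter(csv, "UTF-8")) {
        while (partition.hasNext()) {
            out.println(partition.next()); // rows already formatted as CSV lines
        }
    }
    try (java.sql.Connection conn =
             java.sql.DriverManager.getConnection(url, userName, pwd);
         java.sql.Statement stmt = conn.createStatement()) {
        stmt.execute("Insert into " + tablename
                + " select * from external '" + csv.getAbsolutePath()
                + "' using (maxerrors 0, delimiter ',' remotesource 'jdbc')");
    } finally {
        csv.delete();
    }
});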
We have a very simple piece of Java code which executes an Oracle stored procedure using the CallableStatement API. The Oracle stored procedure does not have an OUT parameter defined; however, in the Java code we are trying to register an OUT parameter and retrieve it:
cs = cnDBCon.prepareCall("{call "+strStoredProcedureName+ "}");
cs.registerOutParameter(1, Types.NUMERIC);
cs.execute();
int isOk = cs.getInt(1);
According to the Java API ( https://docs.oracle.com/javase/8/docs/api/java/sql/CallableStatement.html#getInt-int- ), when the value is SQL NULL, the result of getInt is 0.
It worked perfectly fine when we were running on Java 7/WebLogic 12.1.3. Since we switched to Java 8/WebLogic 12.2.1 we have started to get this error:
java.sql.SQLException: Missing defines
at oracle.jdbc.driver.Accessor.isNull(Accessor.java:744)
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:73)
at oracle.jdbc.driver.OracleCallableStatement.getInt(OracleCallableStatement.java:1815)
at oracle.jdbc.driver.OracleCallableStatementWrapper.getInt(OracleCallableStatementWrapper.java:780)
at weblogic.jdbc.wrapper.CallableStatement_oracle_jdbc_driver_OracleCallableStatementWrapper.getInt(Unknown Source)
The obvious fix is to update the stored procedure or the Java code, but I am wondering why the behavior changed. Is the newer JDBC driver stricter?
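For reference, a sketch of the call without the spurious OUT parameter, reusing the variable names from the snippet above; this only illustrates the mismatch being described and is not a confirmed explanation of the driver change:

// The procedure declares no parameters, so nothing is registered or read back.
try (CallableStatement cs = cnDBCon.prepareCall("{call " + strStoredProcedureName + "}")) {
    cs.execute();
}
// If a numeric status is really needed, the procedure itself would have to declare an
// OUT parameter, and the call string would gain a placeholder, e.g. "{call my_proc(?)}"
// followed by cs.registerOutParameter(1, Types.NUMERIC) before execute().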
I am new to the SoapUI framework. I am trying to use SoapUI to test a REST API. While testing the REST API, I need to verify data in backend databases as well, such as Hive and Cassandra.
I was able to set up SoapUI and test a query against Cassandra using the Groovy script support that SoapUI provides. But when I searched for how to connect to Hive from SoapUI, I couldn't find any reference to it. Also, on their site, JDBC drivers are not provided and Hive is not mentioned.
So is there any option to connect to hive from SoapUI framework?
Should I think about using Hive JDBC driver from SoapUI?
Thanks for your help!
I believe you should be able to use it with different databases in the following ways:
JDBC test step
Groovy Script (you should be able to use almost plain Java code)
Either way, copy the drivers/libraries into the SOAPUI_HOME/bin/ext directory and restart SoapUI.
Here is the link for client code (in Java) to connect to Hive.
Sample connection code from the above link (so it should work from Groovy as well):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// For the HiveServer1 "jdbc:hive://" URL used below
String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";
try {
    Class.forName(driverName);
} catch (ClassNotFoundException e) {
    e.printStackTrace();
    System.exit(1);
}
Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
Statement stmt = con.createStatement();
String tableName = "testHiveDriverTable";
stmt.executeQuery("drop table " + tableName);
ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");
// show tables
String sql = "show tables '" + tableName + "'";
System.out.println("Running: " + sql);
res = stmt.executeQuery(sql);
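Note that this sample targets the old HiveServer1 driver. If the cluster runs HiveServer2 (the usual case on current Hive versions), the driver class and the URL scheme are different; a minimal variant of the connection lines above, with host, port and database as placeholders:

Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");

Whichever driver is used, that driver JAR (and its dependencies) is what goes into SOAPUI_HOME/bin/ext.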
Why do named or positional query parameters not work with NHibernate in my case?
Consider the following statements to be true:
On Oracle databases X and Y, version 11.2.0.3.0, the role "MyRole" exists, is identified by "MyPassword", and is granted to the user I am connected as.
Here is some code:
public void SetRole(string roleName, string rolePassword)
{
    if (HasRoleBeenSet) return;
    try
    {
        session.CreateSQLQuery("SET ROLE ? IDENTIFIED BY ?")
            .SetString(0, roleName)
            .SetString(1, rolePassword)
            .ExecuteUpdate();
        HasRoleBeenSet = true;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
SetRole("MyRole", "MyPassword");
Throws the following Exception:
NHibernate.Exceptions.GenericADOException:
could not execute native bulk manipulation query: SET ROLE ? IDENTIFIED BY ?
[SQL: SET ROLE :p0 IDENTIFIED BY :p1] --->
System.Data.OracleClient.OracleException: ORA-01937: missing or invalid role name
When I use SQLMonitor, included in the Toad suite, the SQL sent to the database looks like this: SET ROLE ? IDENTIFIED BY ?, with the error Error occurred: [1937] (ORA-01937: missing or invalid role name) shown underneath.
When I look at FNH's own generated queries with parameters they look like this:
SchemaName.errorHandler.logError(:v0);
:1=['The error message']
But that's not the case when I manually create the query with CreateSQLQuery().
Okay, the next code sample is this:
...
session.CreateSQLQuery("SET ROLE :roleName IDENTIFIED BY :rolePassword")
.SetString("roleName", roleName)
.SetString("rolePassword", rolePassword)
.ExecuteUpdate();
...
This outputs the following error (the same error):
NHibernate.Exceptions.GenericADOException:
could not execute native bulk manipulation query: SET ROLE :roleName IDENTIFIED BY :rolePassword
[SQL: SET ROLE :p0 IDENTIFIED BY :p1] --->
System.Data.OracleClient.OracleException: ORA-01937: missing or invalid role name
Third code sample:
...
session.CreateSQLQuery(string.Format("SET ROLE {0} IDENTIFIED BY {1}",
roleName,
rolePassword))
.ExecuteUpdate();
...
On Oracle Database X this works wonders; on Oracle Database Y it does not work so well and gives me this error:
NHibernate.Exceptions.GenericADOException:
could not execute native bulk manipulation query: SET ROLE MyRole IDENTIFIED BY MyPassword
[SQL: SET ROLE MyRole IDENTIFIED BY MyPassword] --->
System.Data.OracleClient.OracleException: ORA-00933: SQL command not properly ended
I tried adding a semicolon ; to the end of the statement, but that gives an "invalid character" error.
If I add double quotes around the password like this, it suddenly works for Oracle Database Y as well:
...
session.CreateSQLQuery(string.Format("SET ROLE {0} IDENTIFIED BY \"{1}\"",
roleName,
rolePassword))
.ExecuteUpdate();
...
The problem is that this is not a very good solution, as FNH now spills the password into the exception, which gets logged. I have no idea what the problem is here; there is no clean question because I don't know what to ask, other than to scream for help and hope somebody can shed some light on this.
After some discussion in the comments I tried the following:
...
session.CreateSQLQuery(string.Format("SET ROLE {0} IDENTIFIED BY :rolePassword",
roleName))
.SetString("rolePassword", rolePassword)
.ExecuteUpdate();
...
I tried with both :named and ? (positional) parameters, with single quotes and with double quotes; nothing seems to do the trick.
This code throws the famous ORA-00933: SQL command not properly ended error.
Try using .SetParameter() instead of .SetString(). I am currently using something like this and it works:
var cases = Session.CreateSQLQuery(sql)
.SetParameter("someID",thisIsMyValue)
.SetResultTransformer(NHibernate.Transform.Transformers.AliasToBean<SomeDTO>())
.List<SomeDTO>();
And the SQL looks like this:
var sql = "SELECT fieldA, fieldB FROM myTable WHERE myTable.ID = :someID"