I get a strange exception from our Oracle JDBC driver. How can a native method throw an ArrayIndexOutOfBoundsException? I think it has something to do with the BLOB in one of the columns.
Our server runs Solaris x86, Sun Java 6, and WebSphere. The database is Oracle 11 with ojdbc5.
With my local Windows XP configuration everything works fine.
Any ideas?
java.lang.ArrayIndexOutOfBoundsException
at oracle.jdbc.driver.T2CStatement.t2cDefineFetch(Native Method)
at oracle.jdbc.driver.T2CPreparedStatement.doDefineFetch(T2CPreparedStatement.java:1020)
at oracle.jdbc.driver.T2CPreparedStatement.fetch(T2CPreparedStatement.java:1252)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:373)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:277)
I do nothing special in the code: create a prepared statement, fill in the parameters, and execute the query.
preparedStatement = connection.prepareStatement(queryAsString);
preparedStatement.setObject( [some parameters...] );
ResultSet result = preparedStatement.executeQuery();
result.setFetchSize(100);
while (result.next()) {
    int count = Fields.getFieldsCount();
    Object[] objects = new Object[count];
    for (int j = 0; j < count; ++j) {
        Object object = result.getObject(j + 1);
        objects[j] = object;
    }
    list.add(objects);
}
result.close();
But I don't think the problem is in the Java code, because the same code worked on Java 5 with ojdbc14 on SPARC Solaris. I guess it's more of a configuration thing.
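In case it helps narrow things down, here is a purely diagnostic sketch (not a confirmed fix): set the fetch size on the statement before executing, and read the BLOB column explicitly instead of through getObject, to see whether the LOB fetch path is what trips the native code. The BLOB column index of 1 below is only an example.

// Diagnostic sketch only - assumes the BLOB sits in a known column (index 1 here is an example).
preparedStatement = connection.prepareStatement(queryAsString);
preparedStatement.setFetchSize(100);   // set the fetch size on the statement, before executing
// ... set the parameters as before ...
ResultSet result = preparedStatement.executeQuery();
while (result.next()) {
    java.sql.Blob blob = result.getBlob(1);               // read the BLOB column explicitly
    byte[] bytes = blob.getBytes(1, (int) blob.length()); // Blob offsets are 1-based
    // ... read the remaining columns with getObject as before ...
}
result.close();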
I'm migrating a legacy application that uses DB2 for iSeries on AS400, and it has a specific behavior that I have to reproduce using .NET and the IBM.Data.DB2.iSeries client for .NET.
What I'm describing works for me with non-AS400 DB2. On AS400 DB2 it works for the legacy application I'm replacing, but not for my application.
The behavior in the original application:
1. Begin a transaction.
2. ExecuteReader() => select col1 from table1 where col1 = 1 for update.
3. The row is now locked; anyone else who tries to run the SELECT ... FOR UPDATE should fail.
4. Close the reader opened in step 2.
5. The row is now unlocked; anyone else who tries to run the SELECT ... FOR UPDATE should succeed.
6. Close the transaction and live happily ever after.
In my .NET code I have two problems:
1. Step 2 only checks whether the row is already locked, but doesn't actually lock it, so another user can (and does) run the SELECT ... FOR UPDATE - WRONG BEHAVIOUR.
2. Once that works, I need the lock to be released when the reader is closed (step 4).
Here's my code:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandText = "select col1 from table1 where col1=1 FOR UPDATE";
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
When you run this code twice, you'll see both processes reach the "The Row Should Be Locked" message, which is the problem I'm describing.
The desired result is that the first process reaches the "The Row Should Be Locked" message and the second process fails with a resource-busy error.
Then, when the first process reaches the second message box ("The Row Should Be unlocked"), the second process (after running again) should reach the "The Row Should Be Locked" message.
Any help would be greatly appreciated
The documentation says:
When the UPDATE clause is used, FETCH operations referencing the cursor acquire an exclusive row lock.
This implies a cursor is being used and that the lock occurs when the FETCH statement is executed. I don't see a cursor or a fetch in your code.
Now, whether .NET handles this as a cursor, I don't know, but the DB2 UDB documentation does not have this notation.
The isolation level allows this behavior: reading rows that are locked.
ReadUncommitted
A dirty read is possible, meaning that no shared locks are issued and no exclusive locks are honored.
After much investigation we created a workaround in the form of a stored procedure that performs the lock for us.
The stored procedure looks like this:
CREATE PROCEDURE lib.Select_For_Update (IN SQL CHARACTER (5000))
    MODIFIES SQL DATA
    CONCURRENT ACCESS RESOLUTION WAIT FOR OUTCOME
    DYNAMIC RESULT SETS 1
    OLD SAVEPOINT LEVEL
    COMMIT ON RETURN NO
    DISALLOW DEBUG MODE
    SET OPTION COMMIT = *CHG
BEGIN
    DECLARE X CURSOR WITH RETURN TO CLIENT FOR SS;
    PREPARE SS FROM SQL;
    OPEN X;
END
Then we call it using:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandType = CommandType.StoredProcedure;
        c.AddParameter("sql", "select col1 from table1 where col1=1 FOR UPDATE");
        c.CommandText = "lib.Select_For_Update";
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
We don't like it - but it works.
The information in the MVStore docs on backing up a database is a little vague, and I'm not familiar with all the concepts and terminology, so I wanted to see if the approach I came up with makes sense.
I'm a Clojure programmer, so please forgive my Java here:
// db is an MVStore instance
FileStore fs = db.getFileStore();
FileOutputStream fos = new java.io.FileOutputStream(pathToBackupFile);
FileChannel outChannel = fos.getChannel();
try {
    db.commit();
    db.setReuseSpace(false);
    ByteBuffer bb = fs.readFully(0, (int) fs.size());
    outChannel.write(bb);
}
finally {
    outChannel.close();
    db.setReuseSpace(true);
}
Here's what it looks like in Clojure in case my Java is bad:
(defn backup-db
  [db path-to-backup-file]
  (let [fs (.getFileStore db)
        backup-file (java.io.FileOutputStream. path-to-backup-file)
        out-channel (.getChannel backup-file)]
    (try
      (.commit db)
      (.setReuseSpace db false)
      (let [file-contents (.readFully fs 0 (.size fs))]
        (.write out-channel file-contents))
      (finally
        (.close out-channel)
        (.setReuseSpace db true)))))
My approach seems to work, but I wanted to make sure I'm not missing anything or see if there's a better way. Thanks!
P.S. I used the H2 tag because an MVStore tag doesn't exist and I don't have enough reputation to create it.
The docs currently say:
The persisted data can be backed up at any time, even during write
operations (online backup). To do that, automatic disk space reuse
needs to be first disabled, so that new data is always appended at the
end of the file. Then, the file can be copied. The file handle is
available to the application. It is recommended to use the utility
class FileChannelInputStream to do this.
The classes FileChannelInputStream and FileChannelOutputStream convert a java.nio.FileChannel into a standard InputStream and OutputStream. There is existing H2 code in BackupCommand.java that shows how to use them. We can improve on it by using Java 9's input.transferTo(output) to copy the data:
public void backup(MVStore s, File backupFile) throws Exception {
    try {
        s.commit();
        s.setReuseSpace(false);
        try (RandomAccessFile outFile = new java.io.RandomAccessFile(backupFile, "rw");
             FileChannelOutputStream output = new FileChannelOutputStream(outFile.getChannel(), false)) {
            try (FileChannelInputStream input = new FileChannelInputStream(s.getFileStore().getFile(), false)) {
                input.transferTo(output);
            }
        }
    } finally {
        s.setReuseSpace(true);
    }
}
Note that when you create the FileChannelInputStream you have to pass false to tell it not to close the underlying file channel when the stream is closed. If you don't do that, it will close the file that your FileStore is trying to use. The code uses try-with-resources syntax to make sure that the output file is properly closed.
To try this, I checked out the MVStore code and modified TestMVStore to add a testBackup() method, similar to the existing testSimple() code:
private void testBackup() throws Exception {
    // write some records like testSimple
    String fileName = getBaseDir() + "/" + getTestName();
    FileUtils.delete(fileName);
    MVStore s = openStore(fileName);
    MVMap<Integer, String> m = s.openMap("data");
    for (int i = 0; i < 3; i++) {
        m.put(i, "hello " + i);
    }
    // create a backup
    String fileNameBackup = getBaseDir() + "/" + getTestName() + ".backup";
    FileUtils.delete(fileNameBackup);
    backup(s, new File(fileNameBackup));
    // this throws if you accidentally close the input channel you get from the store
    s.close();
    // open the backup and verify
    s = openStore(fileNameBackup);
    m = s.openMap("data");
    for (int i = 0; i < 3; i++) {
        assertEquals("hello " + i, m.get(i));
    }
    s.close();
}
With your example, you read the whole file into a ByteBuffer, which must fit into memory. The stream transferTo method instead uses an internal buffer that is currently (as of Java 11) 8192 bytes.
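If you ever want control over the buffer size instead of relying on transferTo's default, a plain copy loop over the same two streams works as well. This is just a sketch; the 64 KB buffer size is an arbitrary choice, not something the H2 docs recommend:

// Sketch: manual copy with an explicit buffer, usable in place of input.transferTo(output)
// inside the backup method above. The 64 KB size is an arbitrary choice.
private static void copy(java.io.InputStream input, java.io.OutputStream output) throws java.io.IOException {
    byte[] buffer = new byte[64 * 1024];
    int n;
    while ((n = input.read(buffer)) != -1) {
        output.write(buffer, 0, n);
    }
}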
I am referring to this documentation: http://www-01.ibm.com/support/docview.wss?uid=swg21981328. According to the article, inserts will be faster if we use the executeBatch method (the Netezza JDBC driver may detect a batch insert and, under the covers, convert it to an external table load, which is faster). I have to execute millions of insert statements, and I am getting a speed of at most 500 records per minute per connection. Is there a better way to load data into Netezza faster via a JDBC connection? I am using Spark and a JDBC connection to insert the records. Why is the external table load not happening even when I am executing in batches? Below is the Spark code I am using:
// insertQueryDataSet is a Dataset<String> of complete INSERT statements
insertQueryDataSet.foreachPartition(partition -> {
    Connection conn = NetezzaConnector.getSingletonConnection(url, userName, pwd);
    conn.setAutoCommit(false);
    int insertBatchCount = 0;
    Statement statement = conn.createStatement();
    while (partition.hasNext()) {
        insertBatchCount++;
        statement.addBatch(partition.next());
        if (insertBatchCount % 10000 == 0) {
            LOGGER.info("Before executeBatch.");
            int[] execCount = statement.executeBatch();
            LOGGER.info("After execCount." + execCount.length);
            LOGGER.info("Before commit.");
            conn.commit();
            LOGGER.info("After commit.");
        }
    }
    // execute the remaining statements
    int[] execCount = statement.executeBatch();
    LOGGER.info("After execCount." + execCount.length);
    conn.commit();
    statement.close();
    conn.close();
});
I tried this approach (batch insert) but found it very slow, so I put all the data into CSV files and do an external table load for each CSV.
InsertReq = "Insert into " + tablename + " select * from external '" + filepath + "' using (maxerrors 0, delimiter ',' unase 2000 encoding 'internal' remotesource 'jdbc' escapechar '\\' )";
Jdbctemplate.execute(InsertReq);
Since I was using Java, the remote source is JDBC; note that the CSV file path is in single quotes.
Hope this helps.
If you find a better approach than this one, don't forget to post. :)
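Another thing that may matter: as far as I know, the driver's batch-insert detection generally applies to a parameterized PreparedStatement whose batched rows are bound values, while the question's code feeds complete INSERT strings to a plain Statement, which the driver may not recognize as a loadable batch. If the partition data is available as values rather than SQL text, a parameterized batch is worth trying. A sketch only - the table name, columns, and row accessors are hypothetical, and it assumes the same conn as in the foreachPartition above:

// Sketch only: parameterized batch insert. TARGET_TABLE, its columns, and the Row accessors
// are hypothetical - the partition would need to yield values (e.g. a Dataset<Row>), not SQL strings.
PreparedStatement ps = conn.prepareStatement("INSERT INTO TARGET_TABLE (COL1, COL2) VALUES (?, ?)");
int batchCount = 0;
while (partition.hasNext()) {
    Row row = partition.next();          // assumes a Dataset<Row> partition instead of Dataset<String>
    ps.setInt(1, row.getInt(0));
    ps.setString(2, row.getString(1));
    ps.addBatch();
    if (++batchCount % 10000 == 0) {
        ps.executeBatch();
        conn.commit();
    }
}
ps.executeBatch();                        // flush the remainder
conn.commit();
ps.close();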
We have very simple Java code which executes an Oracle stored procedure using the CallableStatement API. The stored procedure doesn't have an OUT parameter defined; however, in the Java code we are trying to register an OUT parameter and retrieve it:
cs = cnDBCon.prepareCall("{call "+strStoredProcedureName+ "}");
cs.registerOutParameter(1, Types.NUMERIC);
cs.execute();
int isOk = cs.getInt(1);
According to the Java API (https://docs.oracle.com/javase/8/docs/api/java/sql/CallableStatement.html#getInt-int-), when the value is SQL NULL, the result is 0.
It worked perfectly fine when we were running on Java 7/WebLogic 12.1.3. Since we switched to Java 8/WebLogic 12.2.1, we have started to get this error:
java.sql.SQLException: Missing defines
at oracle.jdbc.driver.Accessor.isNull(Accessor.java:744)
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:73)
at oracle.jdbc.driver.OracleCallableStatement.getInt(OracleCallableStatement.java:1815)
at oracle.jdbc.driver.OracleCallableStatementWrapper.getInt(OracleCallableStatementWrapper.java:780)
at weblogic.jdbc.wrapper.CallableStatement_oracle_jdbc_driver_OracleCallableStatementWrapper.getInt(Unknown Source)
The obvious choice is to update the stored procedure or the Java code, but I am wondering why the behavior changed. Is it a stricter JDBC driver?
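To make the "update the stored procedure or the Java code" option concrete, here is a minimal sketch of the Java-side fix (call the procedure without registering an OUT parameter), with the procedure-side alternative shown in comments. It assumes strStoredProcedureName stays as in your code; the p_result parameter name is hypothetical:

// Java-side fix: don't register an OUT parameter, since the procedure doesn't declare one.
CallableStatement cs = cnDBCon.prepareCall("{call " + strStoredProcedureName + "}");
cs.execute();
cs.close();

// Alternative: if a status value is really needed, declare an OUT parameter in the procedure
// (e.g. p_result OUT NUMBER - a hypothetical name) and only then register and read it:
// cs = cnDBCon.prepareCall("{call " + strStoredProcedureName + "(?)}");
// cs.registerOutParameter(1, Types.NUMERIC);
// cs.execute();
// int isOk = cs.getInt(1);
// boolean wasNull = cs.wasNull();   // distinguishes SQL NULL (getInt returns 0) from a real 0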
When I try to upload an image using the code below, I get the following error: java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested
File image = new File("D:/" + fileName);
preparedStatement = connection.prepareStatement(query);
preparedStatement.setString(1, "Ayush");
fis = new FileInputStream(image);
preparedStatement.setBinaryStream(2, (InputStream) fis, (int) image.length());
int s = preparedStatement.executeUpdate();
if (s > 0) {
    System.out.println("Uploaded successfully!");
    flag = true;
}
else {
    System.out.println("Unsuccessful in uploading image.");
    flag = false;
}
Please help me out.
DB Script :
CREATE TABLE ESTMT_SAVE_IMAGE
(
NAME VARCHAR2(50),
IMAGE BLOB
)
Its most common cause is an incompatible conversion, but after seeing your DB script, I assume that you are not doing any conversion there.
There are other reported causes of the ORA-01460 as well:
Incompatible character sets can cause an ORA-01460
Using SQL Developer, attempting to pass a bind variable value in excess of 4000 bytes can result in an ORA-01460
With ODP, users moving from the 10.2 client and 10.2 ODP to the 11.1 client and 11.1.0.6.10 ODP reported an ORA-01460 error. This was a bug that should be fixed by patching ODP to the most recent version.
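If none of those causes apply, it may also be worth trying a different way of binding the BLOB on the Java side. A sketch only, using the ESTMT_SAVE_IMAGE table from your DB script; the INSERT text is spelled out here because the question doesn't show the contents of query:

// Sketch: alternative ways to bind the IMAGE column (assumes a JDBC 4.0+ driver).
File image = new File("D:/" + fileName);
try (FileInputStream fis = new FileInputStream(image)) {
    PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO ESTMT_SAVE_IMAGE (NAME, IMAGE) VALUES (?, ?)");
    ps.setString(1, "Ayush");
    ps.setBinaryStream(2, fis);   // no explicit length - the driver streams until EOF
    // Or, alternatively, materialize a Blob and bind it:
    // java.sql.Blob blob = connection.createBlob();
    // blob.setBytes(1, java.nio.file.Files.readAllBytes(image.toPath()));
    // ps.setBlob(2, blob);
    int rows = ps.executeUpdate();   // rows should be 1 on success
    ps.close();
}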