inserting row into sqlite3 database from play 2.3/anorm: exception being thrown non-deterministically - jdbc

I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert().
For example:
val statement = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId, participantId, clientTime, serverTime)
DB.withConnection { implicit c =>
  val ps = SQL(statement)
  val pKey = ps.executeInsert()
  // ...
}
When an exception is not thrown, pKey contains an option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).

I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, here is the comment on Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* #see java.sql.Statement#getGeneratedKeys()
*/
This all but confirms that this is a hard-to-fix bug in the driver, rooted in SQLite's design, that makes executeInsert() non-thread-safe.
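The race the comment describes is a property of SQLite itself, not just the JDBC driver. A small sketch using Python's stdlib sqlite3 (an analogy only, not the Anorm/JDBC code from the question; table names are invented) shows that last_insert_rowid() is scoped to the connection, so any later statement on the same connection overwrites the value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, v TEXT)")

cur_a = conn.cursor()
cur_b = conn.cursor()

cur_a.execute("INSERT INTO a (v) VALUES ('first')")   # rowid 1 in table a
cur_b.execute("INSERT INTO b (v) VALUES ('second')")  # rowid 1 in table b
cur_b.execute("INSERT INTO b (v) VALUES ('third')")   # rowid 2 in table b

# last_insert_rowid() reflects the most recent insert anywhere on the
# connection (table b's rowid 2), even when queried via cur_a.
rowid = cur_a.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rowid)  # 2
```

With two threads inserting over one shared connection, the read of last_insert_rowid() can interleave with the other thread's INSERT, which is exactly the race the driver comment warns about.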

First, it would be better not to use format for passing parameters to the statement; use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). If the exception is still there, I would suggest testing the connection/statement in a plain vanilla standalone Java test app.
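For comparison, here is the same bound-parameter idea sketched with Python's stdlib sqlite3 rather than Anorm (the table schema and values are invented for illustration): the driver substitutes the values itself, so no format strings or manual quoting are involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE session_record ("
    " id INTEGER PRIMARY KEY,"
    " condition_id INTEGER, participant_id TEXT,"
    " client_timestamp INTEGER, server_timestamp INTEGER)"
)

# Bound parameters instead of string formatting: the driver handles
# quoting and escaping, so a value like "O'Brien" cannot break the SQL.
cur = conn.execute(
    "INSERT INTO session_record"
    " (condition_id, participant_id, client_timestamp, server_timestamp)"
    " VALUES (?, ?, ?, ?)",
    (3, "O'Brien", 1000, 1005),
)
print(cur.lastrowid)  # 1, the generated primary key
```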

Related

Saving entities with user defined primary key values

I'm quite new to the Spring Data JDBC library, but so far I'm really impressed.
Unfortunately, the JDBC driver for my database (SAP Hana) doesn't support retrieving generated keys after INSERT (the implementation of PreparedStatement.getGeneratedKeys() throws UnsupportedOperationException).
Therefore I decided that I won't use generated keys and will define the PK values before saving (and implement Persistable.isNew()). However, even if the PK values are defined before saving, whenever an INSERT operation is triggered, it fails with an error saying that the generated keys can't be retrieved.
After investigating the source code of the affected method (DefaultDataAccessStrategy.insert), I noticed that the JDBC operations update method is always invoked with the KeyHolder parameter.
I've (naively) tweaked the code with the following changes, and it started to work:
- if the PK is already defined, the JDBC operations update method is invoked without the KeyHolder
- such a PK is then immediately returned from the method
The following code snippet from the tweaked insert method illustrates the changes.
Object idValue = getIdValueOrNull(instance, persistentEntity);
if (idValue != null) {
    RelationalPersistentProperty idProperty = persistentEntity.getRequiredIdProperty();
    addConvertedPropertyValue(parameterSource, idProperty, idValue, idProperty.getColumnName());
    /* --- tweak start --- */
    String insertSql = sqlGenerator.getInsert(new HashSet<>(parameterSource.getIdentifiers()));
    operations.update(insertSql, parameterSource);
    return idValue;
    /* --- tweak end --- */
}
So, the question is whether a similar change can be implemented in Spring Data JDBC to support a use case like mine as well.
The question can be considered as closed as the related feature request is registered in the issue tracker.

SQLite NOT throwing exceptions for unknown columns in where clause

A little background: I am building an app on Laravel 5.6.33 (PHP 7.2 with SQLite 3).
So I have this weird case where, in a test, I am expecting an exception but it never gets thrown. I went digging and found that Laravel does not throw exceptions for invalid/non-existent columns in a where clause if the database driver is SQLite. The following code just returns an empty collection instead of throwing an exception.
\App\Tag::where('notAColumn', 'foo')->get();
It's weird, and I checked all over the place to see if it was something wrong with my config, but found nothing out of place. Debug is set to true, etc. I'm running this code for testing the app using an in-memory SQLite database.
One other thing I noticed was that if I use whereRaw instead of where, exceptions are thrown as expected. So, for example, the following throws an exception.
\App\Tag::whereRaw('notAColumn = "foo"')->get();
Does anyone know why this may be?
The difference between your two queries is the (non-)quoting of the column name:
Tag::where('notAColumn', 'foo')->get();
// select * from "tags" where "notAColumn" = 'foo'
Tag::whereRaw("notAColumn = 'foo'")->get(); // Literals are wrapped in single quotes.
// select * from "tags" where notAColumn = 'foo'
From the documentation:
If a keyword in double quotes (ex: "key" or "glob") is used in a context where it cannot be resolved to an identifier but where a string literal is allowed, then the token is understood to be a string literal instead of an identifier.
So SQLite would interpret Tag::where('notAColumn', 'notAColumn')->get(); as a comparison of two (identical) strings and therefore return all rows in the table.
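This behavior is easy to reproduce directly against SQLite, outside Laravel. A short sketch using Python's stdlib sqlite3 (table name borrowed from the question, rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (name TEXT)")
conn.execute("INSERT INTO tags (name) VALUES ('foo')")

# "notAColumn" in double quotes cannot resolve to a column, so SQLite
# silently treats it as the string literal 'notAColumn': no error, just
# a comparison of two unequal strings, hence zero rows.
quoted = conn.execute(
    "SELECT * FROM tags WHERE \"notAColumn\" = 'foo'"
).fetchall()
print(quoted)  # []

# Unquoted, the same token must be an identifier, and SQLite raises
# "no such column: notAColumn", which is what whereRaw exposes.
try:
    conn.execute("SELECT * FROM tags WHERE notAColumn = 'foo'")
    raised = False
except sqlite3.OperationalError:
    raised = True
print(raised)  # True
```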

Best way to return "expected" Oracle exceptions to Java Groovy/Grails

Background:
In my Oracle database, I have plenty of database calls which can cause exceptions. I currently have exception handlers for all these, which call an error package. To cut a long story short, a raise_application_error is eventually raised, for expected errors, or a raise for unexpected errors, and this is sent back to the calling Java Groovy/Grails application layer.
So, for example, if a user enters an id and clicks search, I run a select query from the database. If the id doesn't exist, I have a NO_DATA_FOUND exception which performs a raise_application_error with a custom error message (i.e. "ID entered cannot be found.")
However, the application development team say they're struggling with this. They are trying to perform unit testing in Groovy and ideally want a variable returned. The SQL exceptions I am currently raising cause all their tests to fail, simply because an exception is thrown. Their code looks like this:
void nameOfProcedure() {
    String result = storedProcedure.callDBProcedure(ConnectionType.MSSQL, val1, val2)
    log.info "SQL Procedure query result value: " + result
    assertEquals("1", result)
}
They can add something like this above the test:
@Test (expected = SQLException.class)
But this means any returned SQLException will pass the test, regardless of whether it is the right exception for the issue at hand.
Question:
What is the best solution to this issue? I'm being pressed to return variables from my exception blocks, rather than raise_application_errors - but I'm very reluctant to do this, as I've always been told this is simply terrible practice. Alternatively, they could make changes on their end, but are obviously reluctant to.
What's the next step? Should I be coding to return "expected" errors as variables, as opposed to exceptions? For example, if someone enters an ID that isn't found:
BEGIN
  SELECT id
  INTO v_id  -- some local variable to receive the value
  FROM table
  WHERE id = entered_id;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN 'ID cannot be found';
END;
Or alternatively, should they be following a guide like this which advises using Hamcrest matchers to create their own custom exception property, which they can check against in their JUnit testing. What is best practice here?
You're right, it's terrible practice. It's just 'wagging the dog': they're being lazy and want you to spoil the application design in order to please them.
Generally, a unit test expecting an exception should look something like this:
try {
    String result = callDBProcedure();
    fail("Result instead of exception");
} catch (OracleSQLException e) {
    assertEquals(e.errorCode, RAISE_APPLICATION_ERROR_CODE);
} catch (Throwable t) {
    fail("Unexpected error");
}
They can extend this as they wish. For example, they can write a helper that calls the SP and converts the exception into anything they like, and use it in their tests. But they should not affect the application design outside testing. Never.
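The pattern above can be sketched language-neutrally; below is a minimal Python version of the same idea (names such as call_db_procedure and the error code 20001 are hypothetical stand-ins, not taken from the question): catch the expected exception type and assert on its error code, so only the right failure passes the test.

```python
RAISE_APPLICATION_ERROR_CODE = 20001  # hypothetical custom code


class OracleSQLError(Exception):
    """Stand-in for the driver's SQLException carrying an error code."""
    def __init__(self, error_code):
        super().__init__(f"ORA-{error_code}")
        self.error_code = error_code


def call_db_procedure():
    # Stand-in for the stored-procedure call that does
    # raise_application_error for an unknown ID.
    raise OracleSQLError(RAISE_APPLICATION_ERROR_CODE)


def test_unknown_id_raises_expected_error():
    try:
        call_db_procedure()
        raise AssertionError("Result instead of exception")
    except OracleSQLError as e:
        # Passes only for the specific expected error code, not for
        # any arbitrary database exception.
        assert e.error_code == RAISE_APPLICATION_ERROR_CODE


test_unknown_id_raises_expected_error()
print("ok")
```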

BIRT report Exception when zero results returned from Data Set query

I am running the BIRT 4.3.2 report engine on an IBM WAS 8.5.5 server.
The reports load fine when there are actually results for the given criteria. I receive the following exception when there are no results:
org.eclipse.birt.data.engine.odaconsumer.ResultSet fetch
SEVERE: Cannot fetch the next data row.
Throwable occurred: org.eclipse.birt.report.data.oda.jdbc.JDBCException: Cannot move down to next row in the result set.
SQL error #1:Invalid operation: result set closed
com.ibm.db2.jcc.c.SqlException: Invalid operation: result set closed
at org.eclipse.birt.report.data.oda.jdbc.ResultSet.next(ResultSet.java:198)
I am aware of this topic:
developer.actuate.com/community/forum/index.php?/topic/25148-exception-is-thrown-when-there-is-no-data-retreived-from-query/
Since my report data source simply defines the JDBC class com.ibm.db2.jcc.DB2Driver and a JNDI URL, it uses the WAS data source. I did try adding the allowNextOnExhaustedResultSet custom property to the data source as an integer value of 1, but this did not fix anything for me. This was stated to be only a work-around anyway.
I asked on the BIRT forum whether this would be fixed, but got no response. IBM's suggestion is to modify the application code to avoid calling ResultSet.next if there are no records, or no more records; in BIRT's case that code lives in the data engine's ResultSet class.
Are there any more work-arounds in the meantime?
It's probably also worth mentioning that this was working without issue on BIRT 4.2.0
Here is the fragment of code involved, in org.eclipse.birt.report.data.oda.jdbc.ResultSet.next
try
{
    /* redirect the call to JDBC ResultSet.next() */
    if ( currentRow < maxRows && rs.next( ) )
    {
        currentRow++;
        return true;
    }
    return false;
}
catch ( SQLException e )
{
    throw new JDBCException( ResourceConstants.RESULTSET_CURSOR_DOWN_ERROR, e );
}
As suggested in the link you provided, a "quick" workaround would be to check the state of the result set in this "if" statement:
if ( currentRow < maxRows && !rs.isClosed( ) && rs.next( ) )
{
    currentRow++;
    return true;
}
return false;
Of course, this requires downloading the BIRT source code into Eclipse; there is an automated process for this. I know this is probably not the kind of solution you expect, but it might be the only one. You don't have to compile the whole BIRT project: just export this class as a .jar and replace the old class in Eclipse and in your BIRT runtime environment.
It might also be valuable to submit this as a patch in BIRT's Bugzilla.
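The same guard idea can be sketched outside BIRT. A minimal version using Python's stdlib sqlite3 (an analogy only: sqlite3 cursors have no isClosed(), so the state check becomes a try/except on the error the closed cursor raises, much like rs.next() failing on a closed JDBC ResultSet):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

cur = conn.execute("SELECT x FROM t")  # zero-row result set
cur.close()


def safe_next(cursor):
    # Guarded advance: a closed cursor raises ProgrammingError instead
    # of offering a state check, so trap it and report "no more rows"
    # rather than letting the error propagate.
    try:
        return cursor.fetchone() is not None
    except sqlite3.ProgrammingError:
        return False


print(safe_next(cur))  # False
```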
The answer is that this is probably a bug in the BIRT code, which will hopefully be fixed by upgrading to a future version.
The workaround I suggest is based on my discovery of the root cause of the exception.
In 4.2.0, I used an aggregation expression inside a grid element. After updating to 4.3.2, this is where, and only where, the exception occurs. To fix it, I recreated the same layout and resulting report using a table element with the same aggregation expression, and I no longer receive the exception when no results are returned.

RSpec - How to mock a stored procedure

Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
  query = "SELECT code FROM get_supported_locales();"
  res = ActiveRecord::Base.connection.execute(query)
  res.values.flatten
end
I'm trying to write a test for this method but I'm getting some problems while mocking:
it "should list an intersection of locales available on the app and on last fm" do
  res = mock(PG::Result)
  res.should_receive(:values).and_return(['en', 'pt'])
  ActiveRecord::Base.connection.stub(:execute).and_return(res)
  Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking wrong?
The database is postgres 9.1.
Your test is running using database level transactions. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then set up to use transactions through DatabaseCleaner by default so your existing tests don't change, and then in this one test choose to disable transactions in favor of some other strategy (such as the null strategy since there is no cleaning to be done for this test).
This other SO post indicates you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well; I have not tried that myself, though.
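The same stubbing idea, sketched with Python's unittest.mock instead of RSpec (the class and attribute names are loose stand-ins for the Rails code above, not real APIs): the stubbed execute never reaches the database, which is exactly why the transaction bookkeeping done through the real connection gets skipped.

```python
from unittest import mock


class Language:
    connection = None  # stands in for ActiveRecord::Base.connection

    @classmethod
    def supported_locales(cls):
        # Mirrors the Ruby method: run the query, collect the values.
        res = cls.connection.execute("SELECT code FROM get_supported_locales();")
        return sorted(res.values)


# Build a fake result object exposing a .values attribute, then stub
# the connection so execute() returns it without touching any database.
fake_result = mock.Mock()
fake_result.values = ["en", "pt"]

Language.connection = mock.Mock()
Language.connection.execute.return_value = fake_result

print(Language.supported_locales())  # ['en', 'pt']
Language.connection.execute.assert_called_once()
```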
