Solr JDBC throws an exception as if something is wrong within the implementation - jdbc

I am trying out the Solr JDBC driver.
It seems to work only with DbVisualizer and SQuirreL SQL.
As it is totally undocumented (properties, etc.), I have no idea what the issue is, but I keep getting a strange error:
java.sql.SQLException which results from java.io.IOException which results from org.noggit.JSONParser$ParserException: JSON Parse Error char=<
It seems like something is going wrong internally: the driver expects JSON but receives XML.
The code is very straightforward:
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl");
Connection conn = DriverManager.getConnection("jdbc:solr://zootest01:2181/?collection=mycol");
Statement stat = conn.createStatement();
ResultSet rs = stat.executeQuery("select item_id from mycol limit 10");
The above is about as basic as JDBC code in Java gets, yet executeQuery always throws the exception.
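For completeness, here is the same flow with explicit resource handling (a minimal sketch assuming the same ZooKeeper address and collection as above); it fails at executeQuery in exactly the same way:
// Same SolrJ JDBC driver and URL as in the snippet above, with resources closed properly.
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl");
try (Connection conn = DriverManager.getConnection("jdbc:solr://zootest01:2181/?collection=mycol");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select item_id from mycol limit 10")) {
    while (rs.next()) {
        System.out.println(rs.getString("item_id"));
    }
}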
Even stranger, tools like DBeaver have exactly the same problem.
I couldn't find any explanation for this behavior, unless something in the implementation is somehow hard-coded for the two specific tools mentioned above.

Related

jtds.jdbc.JtdsConnection.createBlob java.lang.AbstractMethodError

I am using mvn:net.sourceforge.jtds/jtds/1.3.1-patch-20190523/jar to save to a Microsoft SQL Server database. This JDBC driver is actually provided by Talend. For various reasons, I have some Java JDBC code which saves blobs to the database. This works fine with an Oracle DB.
try (Connection con = DriverManager.getConnection(this.connectionString, this.user, this.pw)) {
    insert(con, ags, snr, productVersion, nummer, inhalt, this.migrationsBenutzer, format, daten);
}

public long insert(Connection con, String ags, String snr, int productVersion, int nummer, String inhalt, String benutzer, String format, byte[] daten) throws SQLException {
    long result = 0;
    String timestamp = now();
    try {
        Blob blob = con.createBlob();
        blob.setBytes(1, daten);
        PreparedStatement ps = con.prepareStatement(this.insert);
        ...
        ps.setBlob(9, blob);
        result = ps.executeUpdate();
However, when I call the method with a jTDS connection string:
jdbc:jtds:sqlserver://:1433;databaseName=
I get this exception:
Exception in thread "main" java.lang.AbstractMethodError
at net.sourceforge.jtds.jdbc.JtdsConnection.createBlob(JtdsConnection.java:2776)
at de.iteos.dao.BildspeichernDAO.insert(BildspeichernDAO.java:60)
What can I do? If feasible, I want to keep the method as generic as possible. Do I have to switch to another JDBC driver? I have read somewhere that this could be solved with a validation query. For this I would have to implement two different DataSources for Oracle and SQL Server, right?
The problem is that jTDS is - as far as I know - no longer maintained, and available versions are stuck on JDBC 3.0 (Java 1.4) support. The method Connection.createBlob you're trying to call was introduced in JDBC 4.0 (Java 6). Attempts to call this method on a driver that was compiled against the JDBC 3.0 API will result in AbstractMethodError as the method from JDBC 4.0 (or higher) is not implemented.
The simplest and best solution is probably to switch to the Microsoft SQL Server JDBC driver.
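For illustration, switching usually only means swapping the dependency and the connection string; the driver class and URL format below are Microsoft's, with host, port and database name as placeholders:
// Microsoft JDBC Driver for SQL Server (mssql-jdbc); host/port/database are placeholders.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver"); // optional with JDBC 4+ drivers
Connection con = DriverManager.getConnection(
        "jdbc:sqlserver://localhost:1433;databaseName=mydb", this.user, this.pw);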
Alternatively, you need to switch to the JDBC 3.0 options for populating a blob, that is, using the PreparedStatement.setBinaryStream(int, InputStream, int) method (not to be confused with setBinaryStream(int, InputStream) or setBinaryStream(int, InputStream, long) introduced in JDBC 4.0!), or possibly setBytes(int, byte[]), or using driver specific methods for creating blobs.
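For example, the blob column from the question's insert method can be written without Connection.createBlob at all; a sketch keeping the question's variable names:
// JDBC 3.0-compatible alternatives for the blob parameter; no Connection.createBlob needed.
ps.setBinaryStream(9, new java.io.ByteArrayInputStream(daten), daten.length);
// or, if the byte array fits comfortably in memory anyway:
ps.setBytes(9, daten);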
The validation query solution you mention doesn't apply here, that solves a problem with Connection.isValid that the data source implementation in some Tomcat versions calls by default, unless you configure a validation query. It has the same underlying cause (the isValid method was also introduced in JDBC 4.0).
I'm not sure what you mean by "I would have to implement two different DataSources for Oracle and SQL Server right?"; you cannot use one and the same data source for two different databases, so you would need to configure two different instances anyway.

SQLite NOT throwing exceptions for unknown columns in where clause

A little background: I am building an app on Laravel 5.6.33 (PHP 7.2 with SQLite 3).
So I have this weird case where, in a test, I am expecting an exception but it never gets thrown. I went digging and found that Laravel does not throw exceptions for invalid/non-existent columns in a where clause if the database driver is SQLite. The following code just returns an empty collection instead of throwing an exception.
\App\Tag::where('notAColumn', 'foo')->get();
It's weird, and I checked all over the place to see if it was something wrong with my config and found nothing out of place. Debug is set to true, etc. I'm running this code for testing the app using an in-memory SQLite database.
One other thing I noticed is that if I use whereRaw instead of where, exceptions are thrown as expected. So, for example, the following throws an exception.
\App\Tag::whereRaw('notAColumn = "foo"')->get();
Does anyone know why this may be?
The difference between your two queries is the (non-)quoting of the column name:
Tag::where('notAColumn', 'foo')->get();
// select * from "tags" where "notAColumn" = 'foo'
Tag::whereRaw("notAColumn = 'foo'")->get(); // Literals are wrapped in single quotes.
// select * from "tags" where notAColumn = 'foo'
From the documentation:
If a keyword in double quotes (ex: "key" or "glob") is used in a context where it cannot be resolved to an identifier but where a string literal is allowed, then the token is understood to be a string literal instead of an identifier.
So SQLite would interpret Tag::where('notAColumn', 'notAColumn')->get(); as a comparison of two (identical) strings and therefore return all rows in the table.
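The same behavior can be reproduced outside Laravel with plain JDBC (a sketch assuming the xerial sqlite-jdbc driver on the classpath and SQLite's default handling of double-quoted string literals):
import java.sql.*;

public class QuotedIdentifierDemo {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement s = c.createStatement()) {
            s.execute("create table tags (name text)");
            s.execute("insert into tags (name) values ('foo')");
            // "notAColumn" cannot be resolved to a column, so SQLite falls back to the
            // string literal 'notAColumn'; the comparison is false and no rows come back.
            try (ResultSet rs = s.executeQuery("select * from tags where \"notAColumn\" = 'foo'")) {
                System.out.println(rs.next()); // false - empty result, no exception
            }
            // Unquoted, notAColumn must be a column, so this throws "no such column: notAColumn".
            s.executeQuery("select * from tags where notAColumn = 'foo'");
        }
    }
}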

SQLException: Cannot submit statement in current context

I encountered this exception when calling a stored procedure through a prepared statement; however, it works with a callable statement. I am wondering whether it is mandatory to use a CallableStatement to invoke a stored procedure in VoltDB.
String sql = "{call get_city_by_country(?)}";
PreparedStatement stat = conn.prepareStatement(sql);
stat.setString(1, "china");
ResultSet results = stat.executeQuery();
It throws the exception below:
Exception in thread "main" java.sql.SQLException: Cannot submit statement in current context: '{call get_city_by_country(?)};'.
at org.voltdb.jdbc.SQLError.get(SQLError.java:45)
at org.voltdb.jdbc.JDBC4PreparedStatement.executeQuery(JDBC4PreparedStatement.java:121)
This one works fine.
CallableStatement proc = conn.prepareCall(sql);
proc.setString(1, "china");
ResultSet results = proc.executeQuery();
It looks like your driver only supports the CALL escape on CallableStatement, so you need to use CallableStatement instead.
Section 6.4, Java EE JDBC Compliance, of the JDBC 4.2 specification, however, says (emphasis mine):
Drivers must support stored procedures. The DatabaseMetaData method supportsStoredProcedures must return true. The driver must also support the full JDBC API escape syntax for calling stored procedures with the following methods on the Statement, PreparedStatement, and CallableStatement classes:
executeUpdate
executeQuery
execute
This means that your driver is not fully compliant with the Java EE JDBC Compliance requirements. You might want to consider filing a bug report with the driver vendor.
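If in doubt, you can ask the driver directly what it claims to support; a sketch using standard DatabaseMetaData calls on the existing connection:
// Query the driver's self-reported capabilities.
DatabaseMetaData md = conn.getMetaData();
System.out.println("Driver: " + md.getDriverName() + " " + md.getDriverVersion());
System.out.println("supportsStoredProcedures: " + md.supportsStoredProcedures());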

inserting row into sqlite3 database from play 2.3/anorm: exception being thrown non-deterministically

I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statementStr = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId, participantId, clientTime, serverTime)
DB.withConnection( implicit c => {
    val ps = SQL(statementStr)
    val pKey = ps.executeInsert()
    // ...
})
When an exception is not thrown, pKey contains an option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, the comment on Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* @see java.sql.Statement#getGeneratedKeys()
*/
This most certainly confirms that this is a hard-to-fix bug in the driver, due to SQLite's design, which makes executeInsert() not thread-safe.
First, it would be better not to use format for passing parameters to the statement, but to use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). Then, if the exception is still there, I would suggest testing the connection/statement in a plain vanilla standalone Java test app.
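A minimal sketch of such a standalone test, assuming the same database file and session_record table as in the question, with bound parameters instead of format:
import java.sql.*;

public class SqliteInsertTest {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:sqlite:database/mySqliteDb.db");
             PreparedStatement ps = c.prepareStatement(
                     "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (?, ?, ?, ?)")) {
            ps.setInt(1, 1);                            // conditionId
            ps.setString(2, "participant-1");           // participantId
            ps.setLong(3, System.currentTimeMillis());  // clientTime
            ps.setLong(4, System.currentTimeMillis());  // serverTime
            ps.executeUpdate();
            // Generated-key retrieval is the part that misbehaves under load; support depends on the driver.
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    System.out.println("generated key: " + keys.getLong(1));
                }
            }
        }
    }
}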

porting tigase from derby to hsqldb ... how to call stored Java procedure and throw away (ignore) the result?

I am trying to configure Tigase to use HSQLDB (hsqldb-1.8.0.9-1jpp.2) instead of Derby (don't ask why, that's not the point), and everything works fine except for setting some properties at the end. In Derby I had:
CREATE procedure TigAddUserPlainPw(userId varchar(2049), userPw varchar(255))
PARAMETER STYLE JAVA
LANGUAGE JAVA
MODIFIES SQL DATA
DYNAMIC RESULT SETS 1
EXTERNAL NAME 'tigase.db.derby.StoredProcedures.tigAddUserPlainPw';
and
call TigAddUserPlainPw('db-properties', NULL);
When I try to replicate this with HSQLDB using
CREATE ALIAS TigAddUserPlainPw
FOR "tigase.db.derby.StoredProcedures.tigAddUserPlainPw";
and
CALL TigAddUserPlainPw('db-properties', NULL);
I get this error message:
[root@tikanga scripts]# ./hsqldb-db-create.sh /var/lib/tigase/db/tigase
SQL Error at '/etc/tigase/database/hsqldb-schema-4-props.sql' line 1:
"CALL TigAddUserPlainPw('db-properties', NULL)"
Wrong data type: [Ljava.sql.ResultSet; in statement [CALL TigAddUserPlainPw(]
Any idea what I am doing wrong?
You cannot use the Java static methods as they are. The ResultSet[] parameters are not acceptable to HSQLDB 1.8.x.
It would be easier to convert to HSQLDB 2.0, as its stored procedure support has improved over version 1.8.
Your example shows we need to make some more improvements to HSQLDB to support these procedure declarations.
As far as I know, HSQLDB supports Java functions that return a ResultSet (Fred will correct me if I'm wrong): http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#N12835
You would need a new method:
public static ResultSet tigAddUserPlainPw(
String userId, String userPw) throws SQLException;
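A possible shape for that wrapper, assuming the existing Derby-style procedure follows the usual PARAMETER STYLE JAVA convention of a trailing ResultSet[] output parameter (hypothetical; the actual Tigase signature may differ):
// Hypothetical adapter: delegate to the existing Derby-style procedure and hand back
// its single dynamic result set so HSQLDB can treat it as a function result.
public static ResultSet tigAddUserPlainPw(String userId, String userPw)
        throws SQLException {
    ResultSet[] data = new ResultSet[1];
    tigase.db.derby.StoredProcedures.tigAddUserPlainPw(userId, userPw, data);
    return data[0];
}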
