Disabling prepared statements in dbcp+spring+hibernate+jdbc?

I am currently enhancing an application that uses Spring and Hibernate. There are multiple places where the application communicates with the database (Postgres) via prepared statements.
Until now, the application communicated with Postgres via DBCP.
Change:
The application now communicates with Postgres via PgBouncer.
i.e.: application -> dbcp -> pgbouncer -> postgres
I understand this isn't the most ideal solution (i.e. having two poolers), but due to the current architecture we require both of them.
Requirement:
PgBouncer does not support prepared statements in transaction mode, so they have to be eliminated.
Changes made to eliminate prepared statements:
1) psql: VERSION = 9.2.6
no change
2) pgbouncer: In the config file, set the following attributes:
ignore_startup_parameters=extra_float_digits
pool_mode=transaction
server_reset_query=
3) jdbc : The prepared threshold has been set accordingly.
i.e. : jdbc:postgresql://localhost:6541/postgres?prepareThreshold=0
JAVA VERSION = 1.7.0_51
JDBC DRIVER = postgresql-9.3-1102.jdbc41-3.jar
4) dbcp (see the sketch after this list):
poolPreparedStatements = false
maxOpenPreparedStatements = 0
5) hibernate : no changes
6) spring : no changes
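For reference, here is a minimal sketch of the data source described in points 3) and 4), assuming a plain Commons DBCP BasicDataSource configured in Java (the class name and credentials are illustrative, not from the original setup):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceConfig {

    // Sketch only: DBCP data source pointing at pgbouncer, with server-side
    // prepared statements disabled at the driver (prepareThreshold=0) and
    // DBCP's own prepared-statement pooling switched off.
    public DataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost:6541/postgres?prepareThreshold=0");
        ds.setUsername("postgres"); // illustrative credentials
        ds.setPassword("secret");
        ds.setPoolPreparedStatements(false);
        ds.setMaxOpenPreparedStatements(0);
        return ds;
    }
}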
Issue:
In spite of all these changes, I still see prepared statements being created and transactions failing because of that.
"ERROR: prepared statement "S_21" does not exist; nested exception is org.postgresql.util.PSQLException: ERROR: prepared statement "S_21 " does not exist"
I have removed all application logic that explicitly used a prepared statement.
How can I prevent the other prepared statements from being created?
Does spring or hibernate internally create prepared statements for their usage? If yes, how do I disable them?

I understand that this post is from a few years ago but I am still facing the same issues. Unfortunately the suggested changes are not working for my current use case.
Facing the following issue:
- "Error: Prepared statement xxx does not exist"
- "Error: prepared statement xxx already exists"
Tried following the proposed changes but am still getting the same errors:
Tech Stack:
Spring Boot (2.1.7.RELEASE)
Spring Data (JPA + Hibernate)
The application is deployed on Heroku using Heroku Postgres, with client-side PgBouncer.
Modified the DB url with the following properties: "?sslmode=disable&prepareThreshold=0&preparedStatementCacheQueries=0"
The following settings are added to Heroku config:
PGSSLMODE= disable
PGBOUNCER_POOL_MODE = transaction
PGBOUNCER_IGNORE_STARTUP_PARAMETERS = extra_float_digits
Set the PGBOUNCER_URLS config var to the names of the database URL config vars.
Spring Data is set up to use two databases (one for read/write and one read-only replica).
Read-only routing combines @Transactional(readOnly=true) with an @Around("@annotation(transactional)") advice:
public Object proceed(ProceedingJoinPoint proceedingJoinPoint, Transactional transactional) throws Throwable {
    try {
        if (transactional.readOnly()) {
            RoutingDataSource.setReplicaRoute();
            LOGGER.info("Routing database call to the read replica");
        }
        return proceedingJoinPoint.proceed();
    } finally {
        RoutingDataSource.clearReplicaRoute();
    }
}
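For context, the RoutingDataSource referenced above is essentially a thread-local wrapper around Spring's AbstractRoutingDataSource; a minimal sketch (the lookup keys "primary" and "replica" are assumptions, not taken from the original code):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> ROUTE = new ThreadLocal<>();

    // Called by the aspect before a read-only transaction proceeds.
    public static void setReplicaRoute() {
        ROUTE.set("replica");
    }

    // Called in the finally block so later calls go back to the primary.
    public static void clearReplicaRoute() {
        ROUTE.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Falls back to the primary (read/write) data source when no route is set.
        String route = ROUTE.get();
        return route != null ? route : "primary";
    }
}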

The following configuration is working on my system without any "prepared statement S_21 does not exist" errors. Hope it helps:
pgBouncer 1.6.1, pool_mode = transaction
Added to Hibernate db-connection string: prepareThreshold=0
Postgresql-JDBC 9.4-1203-jdbc41 driver
Disable Prepared statements in Hibernate 4.x
<property name="hibernate.cache.use_query_cache">false</property>
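If the connection is configured programmatically instead of in hibernate.cfg.xml, the same two settings would presumably look roughly like this (the pgbouncer port and database name are assumptions):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    // Sketch only: Hibernate 4.x bootstrap with prepareThreshold=0 appended to
    // the JDBC URL and the query cache disabled, mirroring the settings above.
    public SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
            .setProperty("hibernate.connection.driver_class", "org.postgresql.Driver")
            .setProperty("hibernate.connection.url",
                "jdbc:postgresql://localhost:6432/postgres?prepareThreshold=0") // port assumed
            .setProperty("hibernate.cache.use_query_cache", "false");
        return cfg.buildSessionFactory();
    }
}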

Related

JDBC DatabaseMetaData method not implemented by JDBC(T4SQLMX) driver

I am setting up a Spring Boot application to connect to HP NonStop Tandem's SQL/MX. First I achieved this connection by hard-coding the JDBC parameters like dataSource, URL, etc. in the service section of the application, and it worked (I was able to access tables by executing queries).
Now I am trying to remove the hard-coded part and keep my database-related info in the application.properties file, but now I am getting the following error:
org.springframework.jdbc.support.MetaDataAccessException: JDBC DatabaseMetaData method not implemented by JDBC driver - upgrade your driver; nested exception is java.lang.AbstractMethodError: Method com/tandem/t4jdbc/SQLMXConnection.isValid(I)Z is abstract
Can someone help me understand the root cause? The same driver jar worked when the datasource details were hard-coded, but it does not work when the data source properties come from application.properties. Does the jar really need an upgrade?
I encountered the same exception when using Spring Data JPA in a Spring Boot application, the JTDS driver and the Hikari connection pool. In my case I discovered that the following fixed the problem:
Examining the class com.zaxxer.hikari.pool.PoolBase, the following can be observed:
this.isUseJdbc4Validation = config.getConnectionTestQuery() == null;
Thus JDBC 4 validation will not be attempted if there is a connection test query configured. In a Spring Boot application, this can be accomplished like this:
spring.datasource.hikari.connection-test-query=select 1;
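Outside of Spring Boot, the same effect can presumably be achieved by setting the test query on HikariConfig directly; a minimal sketch (the JDBC URL and credentials are illustrative):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariSetup {

    // Sketch only: an explicit connection test query makes Hikari skip the
    // JDBC4 Connection.isValid() check, which the JTDS driver does not implement.
    public HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:jtds:sqlserver://localhost:1433/mydb"); // illustrative URL
        config.setUsername("user");
        config.setPassword("secret");
        config.setConnectionTestQuery("select 1");
        return new HikariDataSource(config);
    }
}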
Regretfully I do not have any experience with the T4SQLMX driver but nevertheless hope this can be of some use.
I recently fought through the same issue. In my case I was using a JDBC type 3 driver, but my Spring setup only supported a type 4 driver, so when the method mentioned above was called it caused the error.
I suggest you look for a type 4 driver for your particular database and see if that resolves your issue.

WebSphere insert/update statements with SQL-SERVER hangs with REQUIRES_NEW propagation

We are facing an issue in our spring batch application when we are deploying the application on WebSphere.
Example: one class contains the parent() method and a second class contains the child() method, where the child method requires a new transaction. After the methods execute, the commit routine hangs when the transaction is committed and nothing happens further.
@Transactional // uses the current transaction
public void parent() {
    child();
}

@Transactional(propagation = Propagation.REQUIRES_NEW) // creates a new transaction
public void child() {
    // database save statements, including updates, inserts and deletes
}
This issue occurs only on WebSphere; the code works fine on our local machines, where we use Tomcat as the web container.
The WebSphere logs/stack trace show that the prepared statement keeps waiting for a response from the database. At the same time, updates and inserts are locked out on the affected tables, i.e. if we run an insert or update query manually against an affected table, the query does not execute.
We are using Spring JPA for data persistence, Spring's JpaTransactionManager for transaction management, and an MS SQL Server database.
Is it that WebSphere does not support creating a new transaction from an existing transaction?
Yes, the pattern you are describing is supported by WebSphere Application Server. Given that this involves locked entries in the database, you might be running into a difference between the application servers in which transaction isolation level is used by default. In WebSphere Application Server you get a default of java.sql.Connection.TRANSACTION_REPEATABLE_READ for SQL Server, whereas I think in most other cases you end up with a default of java.sql.Connection.TRANSACTION_READ_COMMITTED (less locking). If the default value is the problem, you can change it in the data source configuration.
If you are using WebSphere Application Server Liberty, then the default isolation level can be configured in server.xml as a property of the dataSource element, like this,
<dataSource isolationLevel="TRANSACTION_READ_COMMITTED" jndiName=...
If you are using WebSphere Application Server traditional, then the default isolation level can be configured as the webSphereDefaultIsolationLevel custom property, which can be set to the numeric value of the isolation level constant on java.sql.Connection (value for TRANSACTION_READ_COMMITTED is 2).
See this linked article for the steps of doing so via the admin console.
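For reference, the numeric values mentioned above are the constants defined on java.sql.Connection; a quick way to check them:

import java.sql.Connection;

// Prints the numeric values accepted by the webSphereDefaultIsolationLevel property.
public class IsolationConstants {
    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}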

How to disable automatic table modification in jHipster?

I'm trying to connect my jHipster app to a custom MSSQL database. Right now it is connected to a fresh default MySQL db managed by Liquibase and has the default entities that come out of the box with jHipster. I want to do 2 things:
Prevent any db modification scripts that Liquibase may run on startup, e.g. entity creation.
Safely move over to a different db, with old application data and many custom tables, instead of the fresh one configured by default in jHipster.
To do '1' I tried the following in
public class DatabaseConfiguration {

    // ... inside the SpringLiquibase bean factory method ...
    liquibase.setDropFirst(liquibaseProperties.isDropFirst());
    if (env.acceptsProfiles(Constants.SPRING_PROFILE_NO_LIQUIBASE)) {
        liquibase.setShouldRun(false);
    } else {
        liquibaseProperties.setEnabled(false); // <<<<<< I DISABLED IT HERE
        liquibase.setShouldRun(liquibaseProperties.isEnabled());
        log.debug("Configuring Liquibase");
    }
}
But I can still see that Liquibase scripts are being run at startup. Please advise whether I'm doing this correctly.
For #1, you could do it in several ways, as you have both mssql and MySQL: you could either use the JDBC URL in DatabaseConfiguration or modify the Liquibase changelogs to add dbms conditions that exclude them for mssql.
For #2, you should look for existing tools to convert from one database engine to another.
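Regarding the first option, a sketch of what deciding based on the JDBC URL could look like, assuming the data source URL is available to DatabaseConfiguration (for example via Spring Boot's DataSourceProperties; the helper name and the jdbc:sqlserver: prefix check are illustrative):

import liquibase.integration.spring.SpringLiquibase;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.autoconfigure.liquibase.LiquibaseProperties;

public class LiquibaseToggle {

    // Sketch only: let Liquibase run for the default MySQL db but skip it when
    // the configured data source is the custom mssql database.
    static void configureShouldRun(SpringLiquibase liquibase,
                                   DataSourceProperties dataSourceProperties,
                                   LiquibaseProperties liquibaseProperties) {
        String url = dataSourceProperties.getUrl();
        boolean isMssql = url != null && url.startsWith("jdbc:sqlserver:");
        liquibase.setShouldRun(!isMssql && liquibaseProperties.isEnabled());
    }
}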

datanucleus + jpa + oracle. Strange error with tables not existing

I have a strange issue when I try to use Datanucleus to access an Oracle database.
In short, what happens is this :
I run my application; when datanucleus initializes, it complains that it cannot find the tables (although they are in there).
I stop the application, I drop the tables, I add the
datanucleus.autoCreateSchema = true
...property in persistence.xml, and everything works - tables are created and then the select works.
I stop the application again, and then I try to start it with the above parameter disabled.
The error comes back although it was DataNucleus that created the tables in the first place, and now it complains it can't find them.
Also please note that the same setup works without issues with a PostgreSQL database behind it.
Can somebody please help?
A few details about my setup:
I'm using the Oracle thin driver.
My entity classes are annotated like this:
@Entity
@Table(name = "tablename1", schema = "schema2000")
Please note that everything works OK if I remove the schema=... part from the annotation.
The error message is:
16:05:40,216 DEBUG [DataNucleus.Connection] - Setting autocommit=false to connection: com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b
16:05:40,216 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" opened with isolation level "read-committed"
16:05:40,904 DEBUG [DataNucleus.Datastore.Schema] - Check of existence of schema2000.tablename1 returned table type of null
16:05:40,905 DEBUG [DataNucleus.Datastore.Schema] - An error occurred while auto-creating schema elements - rolling back
16:05:41,109 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" non enlisted to a transaction is being committed.
16:05:41,110 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" closed
javax.persistence.PersistenceException: Required table missing : "schema2000.tablename1" in Catalog "" Schema "schema2000". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
at org.datanucleus.api.jpa.NucleusJPAHelper.getJPAExceptionForNucleusException(NucleusJPAHelper.java:274)
at org.datanucleus.api.jpa.JPAEntityManager.merge(JPAEntityManager.java:519)
Suggest you look closer at case-sensitivity of your identifiers. DataNucleus logs what the JDBC driver allows with a line like
Supported Identifier Cases : "MixedCase" UPPERCASE "MixedCase-Sensitive"
so possibly it requires the schema in UPPERCASE, or maybe quoted (all RDBMS are different, and some even differ depending on the operating system they're running on).
Obviously embedding datastore-specific info in annotations is not recommended.
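For example, if the driver folds unquoted identifiers to upper case (as Oracle typically does), the mapping from the question might need to look like this; a sketch, with the entity name and id field invented for illustration:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Sketch only: schema and table names given in the case Oracle stores them in
// (unquoted identifiers are folded to upper case).
@Entity
@Table(name = "TABLENAME1", schema = "SCHEMA2000")
public class MyEntity {

    @Id
    private Long id;
}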

How to use HSQLDB as a datasource in Websphere Application Server?

I am trying to set up a local development infrastructure and I want to use HSQLDB as a datasource with my WAS 6.1. I already know that I have to use Apache DBCP to get connection pooling, but I'm stuck when my application tries to get the first connection.
What I've done
In WAS I created a JDBC provider with the class org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS and removed everything from the classpath input field. Then I put commons-dbcp.jar, commons-pool.jar and hsqldb.jar in MYAPPSERVERDIRECTORY/lib/ext.
Then I created a new datasource with that provider. I added the following custom properties:
driver=org.hsqldb.jdbc.JDBCDriver
url=jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true
user=SA
password=
My Problem
When I run my application and the first connection to the database is made, I get the following exception:
---- Begin backtrace for Nested Throwables
java.sql.SQLException: No suitable driver DSRA0010E: SQL state = 08001, error code = 0
at java.sql.DriverManager.getConnection(DriverManager.java:592)
at java.sql.DriverManager.getConnection(DriverManager.java:196)
at org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.getPooledConnection(DriverAdapterCPDS.java:205)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:918)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:955)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1437)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1089)
at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1837)
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1568)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2338)
at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:909)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:599)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:439)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:408)
Any tips on this? I suspect I'm using a wrong class from hsqldb, or maybe my JDBC url is wrong...
In the example given in the DBCP docs, the org.hsqldb.jdbcDriver class is used as the driver. The org.hsqldb.jdbc.JDBCDriver class is supported only in HSQLDB 2.x, while the other class is supported by all versions of HSQLDB.
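A minimal programmatic sketch of the same setup with the cross-version driver class; the property names match the custom properties listed in the question (the class name is illustrative):

import javax.sql.PooledConnection;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;

public class HsqldbCpdsCheck {
    public static void main(String[] args) throws Exception {
        // Same settings as the data source custom properties above, but using
        // org.hsqldb.jdbcDriver, which exists in every HSQLDB version.
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("org.hsqldb.jdbcDriver");
        cpds.setUrl("jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true");
        cpds.setUser("SA");
        cpds.setPassword("");

        PooledConnection pooled = cpds.getPooledConnection();
        pooled.getConnection().close();
        pooled.close();
    }
}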
