I am setting up a Spring Boot application to connect to HP NonStop Tandem's SQL/MX. At first I hard-coded the JDBC parameters (data source, URL, etc.) in the service layer of the application, and it worked (I was able to access tables by executing a query).
Now I am trying to remove the hard-coded part and keep the database-related info in the application.properties file, but I am getting the following error:
org.springframework.jdbc.support.MetaDataAccessException: JDBC DatabaseMetaData method not implemented by JDBC driver - upgrade your driver; nested exception is java.lang.AbstractMethodError: Method com/tandem/t4jdbc/SQLMXConnection.isValid(I)Z is abstract
Can someone help me understand the root cause? The same driver jar is used in both cases: it worked when the data source details were hard-coded, but not when they are defined in application.properties, where the error now claims the driver needs an upgrade.
I encountered the same exception when using Spring Data JPA in a Spring Boot application with the jTDS driver and the HikariCP connection pool. In my case I discovered that the following fixed the problem:
Examining the class com.zaxxer.hikari.pool.PoolBase, the following can be observed:
this.isUseJdbc4Validation = config.getConnectionTestQuery() == null;
Thus JDBC 4 validation will not be attempted if there is a connection test query configured. In a Spring Boot application, this can be accomplished like this:
spring.datasource.hikari.connection-test-query=select 1;
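If you configure the pool programmatically instead of through application.properties, the equivalent setting on HikariConfig would be along these lines (a minimal sketch, assuming HikariCP; jdbcUrl, username and password stand for your existing settings):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {

    public static DataSource create(String jdbcUrl, String username, String password) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);
        config.setUsername(username);
        config.setPassword(password);
        // With a test query configured, HikariCP skips JDBC4 Connection.isValid() validation
        config.setConnectionTestQuery("select 1");
        return new HikariDataSource(config);
    }
}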
Regrettably, I do not have any experience with the T4SQLMX driver, but I nevertheless hope this can be of some use.
I recently fought through the same issue. In my case I was using a JDBC type 3 driver, but my Spring setup only supported a type 4 driver, so the error occurred when the method mentioned above was called.
I suggest you look for a type 4 driver for your particular database and see if that resolves your issue.
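For illustration only, a sketch of pointing Spring Boot explicitly at a type 4 driver through a DataSource bean; the driver class name com.tandem.t4jdbc.SQLMXDriver and the jdbc:t4sqlmx:// URL prefix are assumptions based on the package name in the stack trace, so check the T4SQLMX documentation for the exact values:

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SqlMxDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // Driver class and URL format are assumptions; consult the T4SQLMX driver docs
        return DataSourceBuilder.create()
                .driverClassName("com.tandem.t4jdbc.SQLMXDriver")
                .url("jdbc:t4sqlmx://<host>:<port>")
                .username("<user>")
                .password("<password>")
                .build();
    }
}

The same values can of course be supplied via the standard spring.datasource.driver-class-name, spring.datasource.url, spring.datasource.username and spring.datasource.password properties instead.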
Related
I'm using the Oracle JDBC driver in a dynamic web application and get this exception:
...
Caused by: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at com.zaxxer.hikari.util.DriverDataSource.<init>(DriverDataSource.java:106)
... 56 more
However, the driver should be loaded correctly during deployment.
[Screenshot: the Deployment Assembly properties page with the Oracle driver included]
Edit: Eclipse shows me this warning after cleaning the project:
Classpath entry xxxx/ojdbc6.jar will not be exported or published. Runtime ClassNotFoundExceptions may result. (Classpath Dependency Validator Message, project "service")
Edit: information about my system:
1- Tomcat 8.5: I also have the driver in $CATALINA_HOME/lib, by the way
2- Oracle 11g [ojdbc6]: with HikariCP
3- Eclipse 4.14.0: with Maven, using the Jersey webapp archetype
Edit: it worked previously with a normal (not web) project
Edit: this is how I had configured the datasource:
HikariConfig config = new HikariConfig();
config.setJdbcUrl(jdbcUrl);
config.setUsername(username);
config.setPassword(password);
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
// config.addDataSourceProperty("driverClassName", "oracle.jdbc.driver.OracleDriver");
config.setDriverClassName("oracle.jdbc.driver.OracleDriver");

ds = new HikariDataSource(config);
// some code
The only thing that has worked for me so far is adding ojdbc6.jar manually to the JVM libraries under /lib/jvm/java-8-openjdk/jre/lib/ext and then rebooting my machine. I know this screams BAD PRACTICE, because everyone else would have to do that too. I won't accept this as an answer and am waiting for a good solution.
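One alternative to the ext-folder hack, sketched below, is to force the driver to load from the webapp's own classloader at startup; note this only helps if ojdbc6.jar actually ends up in WEB-INF/lib, which is what the Deployment Assembly warning above is about:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Sketch only: forces oracle.jdbc.driver.OracleDriver to be loaded (and thus
// registered with DriverManager) by the webapp classloader at startup.
@WebListener
public class DriverRegistrationListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("Oracle JDBC driver not on the webapp classpath", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}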
Can you take a look at Tomcat Servlet ?
I am getting the error below:
Unexpected error
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection
Here is the scenario in which it occurs:
I have recently moved some tables from MySQL to Mongo. The code is written in such a way that data is taken from either Mongo or MySQL.
The code lives in a method annotated with @Transactional, provided by the Spring framework.
There is a Hibernate layer which uses the transactions provided by Spring; c3p0 is the connection pool.
The connection pool parameters are:
hibernate.c3p0.min_size=5
hibernate.c3p0.timeout=1200
hibernate.c3p0.max_size=35
hibernate.c3p0.max_statements=50
The problem comes when we try to pull the data from Mongo. It looks like the transaction is not closed because of the Mongo operation, so the database connection is not released and the pool eventually reaches its defined max size.
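To make the structure concrete, the layout is roughly like this (class and entity names such as DocumentService and Document are illustrative, not the real code):

import org.hibernate.SessionFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.transaction.annotation.Transactional;

// Illustrative only: the Mongo branch still runs inside the Spring-managed
// Hibernate transaction, so a c3p0 connection stays checked out for the whole
// call even though no SQL is executed on that path.
public class DocumentService {

    private SessionFactory sessionFactory;  // injected
    private MongoTemplate mongoTemplate;    // injected

    @Transactional
    public Document fetch(long id, boolean fromMongo) {
        if (fromMongo) {
            // No JDBC work here, but the transaction (and its pooled connection) is still open
            return mongoTemplate.findById(id, Document.class);
        }
        return (Document) sessionFactory.getCurrentSession().get(Document.class, id);
    }
}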
I tried the following query in the DB to check the connections:
show status like '%onn%';
Any suggestion to resolve this would really help.
Thanks
Our application uses Spring for transaction management and marks certain transactions as read-only. When deploying the application on WebSphere (8.5.5.3) with an Oracle JDBC connection we get exceptions like the following:
Caused by: java.sql.SQLException: DSRA9010E: 'setReadOnly' is not supported on the WebSphere java.sql.Connection implementation.
at com.ibm.ws.rsadapter.spi.InternalOracleDataStoreHelper.setReadOnly(InternalOracleDataStoreHelper.java:371)
at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.setReadOnly(WSJdbcConnection.java:3646)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.getTargetConnection(LazyConnectionDataSourceProxy.java:410)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.invoke(LazyConnectionDataSourceProxy.java:376)
at com.sun.proxy.$Proxy476.getMetaData(Unknown Source)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy$TransactionAwareInvocationHandler.invoke(TransactionAwareDataSourceProxy.java:239)
at com.sun.proxy.$Proxy476.getMetaData(Unknown Source)
I know what WebSphere is trying to tell me, but I was wondering if there is a way to disable this check so that Connection.setReadOnly calls are simply ignored instead of throwing an exception.
Of course I could also change the application to not use read-only transactions, but that would be much more complicated.
Try to unwrap the connection object like this:
Context ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();
if (conn.isWrapperFor(oracle.jdbc.OracleConnection.class)) {
    // Returns an object that implements the given interface to
    // allow access to non-standard methods, or standard methods
    // not exposed by the proxy.
    oracle.jdbc.OracleConnection oraCon = conn.unwrap(oracle.jdbc.OracleConnection.class);
    // Do some Oracle-specific work here.
    oraCon.setReadOnly(readOnly);
    ....
}
conn.close();
See WebSphere Application Server and the JDBC 4.0 Wrapper Pattern
Oracle JDBC (12c onwards; perhaps 11g as well?) can be tricky to use with read-only transactions.
According to https://marschall.github.io/2017/01/28/oracle-read-only.html:
Calling Connection.setReadOnly(true) with the 12c driver no longer establishes a read only transaction
This means that Spring (before version 4.3.7) also struggles to set read only transactions with Oracle JDBC (see previous link).
To overcome this, you need to manually include SET TRANSACTION READ ONLY in your SQL instead of relying on Spring's @Transactional(readOnly=true).
However, from Spring 4.3.7 onwards the transaction behaves correctly, so you should no longer see this issue (https://github.com/spring-projects/spring-framework/issues/19774).
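If you are stuck on an older Spring version and need the manual workaround mentioned above, a minimal sketch (assuming a plain JDBC Connection obtained from the DataSource) might look like this:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class ReadOnlyQueryRunner {

    public void runReadOnly(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // In Oracle this must be the first statement of the transaction
                stmt.execute("SET TRANSACTION READ ONLY");
                // ... run the read-only queries on the same connection ...
            }
            conn.commit();
        }
    }
}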
I'm seeing something odd when I run a query in an application deployed in Oracle Application Server 10.1.3, with Oracle10g.
When I run a statement against the database directly (e.g. a standalone app that calls a DAO implemented with hibernate) I see the following:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
[main] TRACE org.hibernate.type.LongType - binding '1768334' to parameter: 1
[main] TRACE org.hibernate.type.TimestampType - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
[main] TRACE org.hibernate.type.BinaryType - returning '7f587f608090cac6c9c68081818180b380b380807f5b80c3807f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f40808b8880918091818191807f44809f8080818581818181818180808080808080808182838485868788898a8b7f44803590808281838382848385858484808081fd8182838084918592a1b1c18693d1e187a2f194b201112188a3c2314195d25170a4b3e2f202898a969798999aa5a6a7a8a9aab4b5b6b7b8b9bac3c4c5c6c7c8c9cad3d4d5d6d7d8d9dae3e4e5e6e7e8e9eaf3f4f5f6f7f8f9fa030405060708090a12131415161718191a22232425262728292a32333435363738393a42434445464748494a52535455565758595a6162636465666768696a7172737475767778797a7f5a808881818080bf80fef947bf520c730eff25ada7bd007c7f807a460efd87677f805625220aab7f59' as column: CONTENT63_0_
The same DAO operation when run within the application server however returns the following:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
2013-08-06 12:49:46,484 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:133 nullSafeSet()) - binding '1768334' to parameter: 1
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '80d48081818c808080818080808180808099ff0c809a5c9d809a5c9c80828082808080817f587f608090cac6c9c68081808080804818f7ef8081808080808080808080808080808080808080809a5c9c83408c508081' as column: CONTENT63_0_
You can see that the identifier and timestamp are the same in both cases, but the content blob is different: 360 bytes in the first case and 86 bytes in the second case.
The stand-alone application uses a BasicDataSource, while the application on the server uses a JNDI data source. I have verified that the BasicDataSource contains the same JDBC url that is used in the JNDI data source. Both data sources use the same credentials.
The database operation in the application server has a different trace output, using NullableType::nullSafeGet() to display information instead of org.hibernate.type tracing. I'm not sure if that is relevant.
Is there something obvious that I am overlooking here? I can't see why I am getting different results when running the same query on the same database.
Edit: on OAS I have configured a JDBC connection pool that uses the connection factory class oracle.jdbc.pool.OracleDataSource, and the JDBC data source is a managed data source pointing to that connection pool.
Could there be an issue with different Oracle JDBC drivers? The BasicDataSource for the stand-alone app uses the JDBC driver oracle.jdbc.driver.OracleDriver and the dialect org.hibernate.dialect.Oracle10gDialect. I can't see any place in the OAS administration console that shows the equivalent values.
Please have a look at this article
It looks like, for some reason, OAS returns only 86 bytes of the BLOB value unless you specify a LOB handler in your configuration.
You can also find more info in this CodeRanch thread describing the same issue.
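For what it's worth, a sketch of what a LOB handler does: you can read the column outside Hibernate through Spring's LobHandler with a plain JdbcTemplate (table and column names taken from the query above) and compare the length with what the DAO returns:

import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.support.lob.DefaultLobHandler;
import org.springframework.jdbc.support.lob.LobHandler;

public class ContentBlobCheck {

    private final JdbcTemplate jdbcTemplate;
    private final LobHandler lobHandler = new DefaultLobHandler();

    public ContentBlobCheck(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Returns the BLOB length for the given CONTENT_ID, bypassing Hibernate
    public int contentLength(long contentId) {
        return jdbcTemplate.queryForObject(
                "select CONTENT from MySchema.MyTable where CONTENT_ID = ?",
                new Object[] { contentId },
                new RowMapper<Integer>() {
                    @Override
                    public Integer mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return lobHandler.getBlobAsBytes(rs, "CONTENT").length;
                    }
                });
    }
}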
Hope this helps!
I am trying to set up a local development infrastructure and I want to use HSQLDB as a datasource with my WAS 6.1. I already know that I have to use Apache DBCP to get connection pooling, but I'm stuck when my application tries to get the first connection.
What I've done
In WAS I created a JDBC provider with the class org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS and removed everything from the classpath input field. Then I put commons-dbcp.jar, commons-pool.jar and hsqldb.jar in MYAPPSERVERDIRECTORY/lib/ext.
Then I created a new datasource with that provider. I added the following custom properties:
driver=org.hsqldb.jdbc.JDBCDriver
url=jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true
user=SA
password=
My Problem
When I run my application and the first connection to the database is made, I get the following exception:
---- Begin backtrace for Nested Throwables
java.sql.SQLException: No suitable driverDSRA0010E: SQL-Status = 08001, Fehlercode = 0
at java.sql.DriverManager.getConnection(DriverManager.java:592)
at java.sql.DriverManager.getConnection(DriverManager.java:196)
at org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.getPooledConnection(DriverAdapterCPDS.java:205)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:918)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:955)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1437)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1089)
at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1837)
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1568)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2338)
at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:909)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:599)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:439)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:408)
Any tips on this? I suspect I'm using the wrong class from HSQLDB, or maybe my JDBC URL is wrong...
In the example given in the DBCP docs, the org.hsqldb.jdbcDriver class is used as the driver. org.hsqldb.jdbc.JDBCDriver is supported only in HSQLDB 2.x, while the other class is supported by all versions of HSQLDB.
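To rule out the driver class and URL themselves, a quick standalone check outside WebSphere might look like this (a sketch using the values from the question; the try-with-resources form assumes Java 7+):

import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbDriverCheck {

    public static void main(String[] args) throws Exception {
        // org.hsqldb.jdbcDriver is available in all HSQLDB versions;
        // org.hsqldb.jdbc.JDBCDriver only exists in HSQLDB 2.x
        Class.forName("org.hsqldb.jdbcDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true", "SA", "")) {
            System.out.println("Connected to HSQLDB "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}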