Because of the limitations of the VARCHAR2 type in Oracle, namely its 4000-byte size limit, I want to use the CLOB type for a column in a certain entity. So, after reading the documentation, I declared the field in my JPA entity this way:
@Column
@Lob
private String log;
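For context, the entity as a whole would look roughly like this (a sketch; only the Meeting name and the log field come from the question, the rest is assumed):
// Sketch of the surrounding entity; javax.persistence imports omitted.
@Entity
public class Meeting {

    @Id
    @GeneratedValue
    private Long id;    // assumed id mapping

    // Mapped to a CLOB so the text is not limited to VARCHAR2's 4000 bytes
    @Lob
    @Column
    private String log;

    // getters and setters omitted
}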
I don't have any problems persisting this entity, but when I try to retrieve it with a simple SELECT m FROM Meeting m, EclipseLink complains about not being able to convert the type oracle.sql.CLOB to java.lang.String:
javax.persistence.PersistenceException: Exception [EclipseLink-3001]
(Eclipse Persistence Services - 2.5.0.v20130507-3faac2b):
org.eclipse.persistence.exceptions.ConversionException Exception
Description: The object [oracle.sql.CLOB#e9fbb9], of class [class
oracle.sql.CLOB], could not be converted to [class java.lang.String].
Internal Exception: java.lang.IllegalStateException: This web
container has not yet been started
I'm using EclipseLink 2.5.1 as my JPA implementation. The database is Oracle 10g (10.2.0.3.0) and I'm using ojdbc6.jar as the driver.
Did you specify the target database platform to use? See Specifying the Database for an Application and Java Persistence API (JPA) Extensions Reference for EclipseLink, Release 2.4 for information. If it is not set, try setting it to Oracle10.
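If it is not set yet, the platform can be specified as a persistence-unit property, for example (the unit name is just a placeholder):
<persistence-unit name="myPU">
    ...
    <properties>
        <property name="eclipselink.target-database" value="Oracle10"/>
    </properties>
</persistence-unit>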
Related
I am setting up a Spring Boot application to connect to HP NonStop Tandem's SQL/MX. First I achieved this connection by hard-coding the JDBC parameters like the dataSource, URL, etc. in the service section of the application, and it worked (I was able to access tables by executing queries).
Now I am trying to remove the hard-coded part and keep the database-related info in the application.properties file, but now I am getting the following error:
org.springframework.jdbc.support.MetaDataAccessException: JDBC DatabaseMetaData method not implemented by JDBC driver - upgrade your driver; nested exception is java.lang.AbstractMethodError: Method com/tandem/t4jdbc/SQLMXConnection.isValid(I)Z is abstract
Can someone help me understand the root cause? The same driver jar is being used in both cases: it worked when the datasource details were hard-coded, but it does not work when the datasource properties come from application.properties. Does the driver jar really need an upgrade?
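For reference, the properties-based configuration being described usually boils down to entries like these (all values are placeholders, and the driver class name is only a guess based on the package shown in the stack trace):
spring.datasource.url=jdbc:t4sqlmx://<host>:<port>
spring.datasource.username=<user>
spring.datasource.password=<password>
spring.datasource.driver-class-name=com.tandem.t4jdbc.SQLMXDriver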
I encountered the same exception when using Spring Data JPA in a Spring Boot application with the jTDS driver and the Hikari connection pool. In my case I discovered that the following fixed the problem:
Examining the class com.zaxxer.hikari.pool.PoolBase, the following can be observed:
this.isUseJdbc4Validation = config.getConnectionTestQuery() == null;
Thus JDBC 4 validation will not be attempted if there is a connection test query configured. In a Spring Boot application, this can be accomplished like this:
spring.datasource.hikari.connection-test-query=select 1;
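If the pool is configured programmatically instead of through Spring Boot properties, the equivalent would be roughly the following (using com.zaxxer.hikari.HikariConfig directly; URL and credentials are placeholders):
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:...");            // placeholder JDBC URL
config.setUsername("user");               // placeholder
config.setPassword("secret");             // placeholder
// A configured test query makes Hikari skip JDBC 4 validation (Connection.isValid)
config.setConnectionTestQuery("select 1");
HikariDataSource dataSource = new HikariDataSource(config);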
Regretfully I do not have any experience with the T4SQLMX driver but nevertheless hope this can be of some use.
I recently fought through the same issue. In my case I was using a JDBC type 3 driver, but my Spring setup only supported a type 4 driver, so the error occurred when the method you linked above was called.
I suggest you look for a type 4 driver for your particular database and see if that resolves your issue.
I am getting the error below
Unexpected error
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection
Here is the scenario in which it occurs :
I have recently moved some tables from MySQL to Mongo. The code is written in such a way that data is taken from either Mongo or MySQL.
The code lives in a method annotated with @Transactional, provided by the Spring framework.
There is a Hibernate layer which uses transactions provided by Spring, and c3p0 is the connection pool.
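A rough sketch of the pattern being described (class, field and query names are made up for illustration; assumes Spring Data MongoDB's MongoTemplate and a Hibernate SessionFactory are injected):
@Transactional
public List<Record> loadRecords(boolean fromMongo) {
    if (fromMongo) {
        // the Mongo read still runs inside the Spring-managed Hibernate/c3p0 transaction
        return mongoTemplate.find(new Query(), Record.class);
    }
    return sessionFactory.getCurrentSession()
            .createQuery("from Record")
            .list();
}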
The connection pool parameters are:
hibernate.c3p0.min_size=5
hibernate.c3p0.timeout=1200
hibernate.c3p0.max_size=35
hibernate.c3p0.max_statements=50
The problem comes when we try to pull data from Mongo. It looks like the transaction is not getting closed because of the Mongo operation, so the database connection is not released and the pool eventually reaches its configured max size.
I tried the following query in the DB to check the connections:
show status like '%onn%';
Any suggestions to resolve this would really help.
Thanks
I have an application deployed in WebLogic 12c and a datasource configured against an Oracle RAC.
It uses EclipseLink 2.4.1 as the JPA provider.
The problem is this exception:
Caused by: Exception [EclipseLink-3001] (Eclipse Persistence Services - 2.4.1.v20121003-ad44345): org.eclipse.persistence.exceptions.ConversionException
Exception Description: The object [weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB#87a98a], of class [class weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB], could not be converted to [class java.lang.String].
Internal Exception: java.lang.ClassCastException: oracle.sql.CLOB
....
Caused by: java.lang.ClassCastException: oracle.sql.CLOB
at weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB.length(Unknown Source)
at org.eclipse.persistence.internal.helper.ConversionManager.convertObjectToString(ConversionManager.java:679)
at org.eclipse.persistence.internal.helper.ConversionManager.convertObject(ConversionManager.java:100)
....
Well, I understand what the problem is, but I can't understand why it only occurs after some Oracle instance in the RAC has been reset. When this happens, the only thing to do is restart the WLS instance. And while this error is happening, the rest of the datatypes work fine!
Maybe there is some kind of direct relation between LOB types and the individual Oracle instances inside a RAC? Is there some scenario in which restarting an Oracle instance inside a RAC breaks the CLOB datatype?
Thanks in advance!
Best regards
I'm seeing something odd when I run a query in an application deployed in Oracle Application Server 10.1.3, with Oracle 10g.
When I run a statement against the database directly (e.g. a standalone app that calls a DAO implemented with Hibernate) I see the following:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
[main] TRACE org.hibernate.type.LongType - binding '1768334' to parameter: 1
[main] TRACE org.hibernate.type.TimestampType - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
[main] TRACE org.hibernate.type.BinaryType - returning '7f587f608090cac6c9c68081818180b380b380807f5b80c3807f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f40808b8880918091818191807f44809f8080818581818181818180808080808080808182838485868788898a8b7f44803590808281838382848385858484808081fd8182838084918592a1b1c18693d1e187a2f194b201112188a3c2314195d25170a4b3e2f202898a969798999aa5a6a7a8a9aab4b5b6b7b8b9bac3c4c5c6c7c8c9cad3d4d5d6d7d8d9dae3e4e5e6e7e8e9eaf3f4f5f6f7f8f9fa030405060708090a12131415161718191a22232425262728292a32333435363738393a42434445464748494a52535455565758595a6162636465666768696a7172737475767778797a7f5a808881818080bf80fef947bf520c730eff25ada7bd007c7f807a460efd87677f805625220aab7f59' as column: CONTENT63_0_
The same DAO operation, however, returns the following when run within the application server:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
2013-08-06 12:49:46,484 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:133 nullSafeSet()) - binding '1768334' to parameter: 1
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '80d48081818c808080818080808180808099ff0c809a5c9d809a5c9c80828082808080817f587f608090cac6c9c68081808080804818f7ef8081808080808080808080808080808080808080809a5c9c83408c508081' as column: CONTENT63_0_
You can see that the identifier and timestamp are the same in both cases, but the content blob is different: 360 bytes in the first case and 86 bytes in the second case.
The stand-alone application uses a BasicDataSource, while the application on the server uses a JNDI data source. I have verified that the BasicDataSource contains the same JDBC url that is used in the JNDI data source. Both data sources use the same credentials.
The database operation in the application server has a different trace output, using NullableType::nullSafeGet() to display information instead of org.hibernate.type tracing. I'm not sure if that is relevant.
Is there something obvious that I am overlooking here? I can't see why I am getting different results when running the same query on the same database.
Edit: on OAS I have configured a JDBC connection pool that uses the connection factory class oracle.jdbc.pool.OracleDataSource, and the JDBC data source is a managed data source pointing to that connection pool.
Could there be an issue with different Oracle JDBC drivers? The BasicDataSource for the stand-alone app uses the JDBC driver oracle.jdbc.driver.OracleDriver and the dialect org.hibernate.dialect.Oracle10gDialect. I can't see any place in OAS administration that shows the equivalent values.
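One way to confirm which driver and version the server-side data source actually uses is to log the JDBC metadata from inside the deployed application, for example (the JNDI name is a placeholder; javax.naming and java.sql imports omitted):
DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myManagedDataSource");
Connection con = ds.getConnection();
DatabaseMetaData md = con.getMetaData();
// prints the driver name/version the container actually loaded
System.out.println(md.getDriverName() + " " + md.getDriverVersion() + " / " + md.getDatabaseProductVersion());
con.close();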
Please have a look at this article.
It looks like, for some reason, OAS returns only 86 bytes of the BLOB value unless you specify a LOB handler in your configuration.
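For what it's worth, when Spring manages the Hibernate SessionFactory, a LOB handler is typically wired in along these lines (a sketch; the bean names and the native JDBC extractor bean are assumptions):
<bean id="lobHandler" class="org.springframework.jdbc.support.lob.OracleLobHandler">
    <property name="nativeJdbcExtractor" ref="nativeJdbcExtractor"/>
</bean>

<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="lobHandler" ref="lobHandler"/>
    <!-- mapping resources and Hibernate properties go here -->
</bean>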
You can also find more info in this thread on CodeRanch, which describes the same issue.
Hope this helps!
I am trying to set up a local development infrastructure and I want to use HSQLDB as a datasource with my WAS 6.1. I already know that I have to use Apache DBCP to get connection pooling, but I'm stuck when my application tries to get the first connection.
What I've done
In WAS I created a JDBC provider with the class org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS and removed everything from the classpath input field. Then I put commons-dbcp.jar, commons-pool.jar and hsqldb.jar in MYAPPSERVERDIRECTORY/lib/ext.
Then I created a new datasource with that provider. I added the following custom properties:
driver=org.hsqldb.jdbc.JDBCDriver
url=jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true
user=SA
password=
My Problem
When I run my application and the first connection to the database is made, I get the following exception:
---- Begin backtrace for Nested Throwables
java.sql.SQLException: No suitable driver DSRA0010E: SQL state = 08001, error code = 0
at java.sql.DriverManager.getConnection(DriverManager.java:592)
at java.sql.DriverManager.getConnection(DriverManager.java:196)
at org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.getPooledConnection(DriverAdapterCPDS.java:205)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:918)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:955)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1437)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1089)
at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1837)
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1568)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2338)
at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:909)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:599)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:439)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:408)
Any tips on this? I suspect I'm using a wrong class from HSQLDB, or maybe my JDBC URL is wrong...
In the example given in the DBCP docs, the org.hsqldb.jdbcDriver class is used as the driver. org.hsqldb.jdbc.JDBCDriver is supported only in HSQLDB 2.x, while the older class is supported by all versions of HSQLDB.
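So, if the hsqldb.jar on your classpath is older than 2.x, the datasource custom property would presumably need to be:
driver=org.hsqldb.jdbcDriver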