I have a strange issue when I try to use Datanucleus to access an Oracle database.
In short, what happens is this:
I run my application; when DataNucleus initializes, it complains that it cannot find the tables (although they do exist).
I stop the application, I drop the tables, I add the
datanucleus.autoCreateSchema = true
...property in persistence.xml, and everything works - tables are created and then the select works.
I stop the application again, and then I try to start it with the above parameter disabled.
The error comes back: even though it was DataNucleus that created the tables in the first place, it now complains that it can't find them.
Also, please note that the same setup works without issues against a PostgreSQL database.
Can somebody please help?
A few details about my setup :
I'm using Oracle thin driver.
My entity classes are annotated like this:
@Entity
@Table(name = "tablename1", schema = "schema2000")
Please note that everything works OK if I remove the schema=... part from the annotation.
The error message is:
16:05:40,216 DEBUG [DataNucleus.Connection] - Setting autocommit=false to connection: com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b
16:05:40,216 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" opened with isolation level "read-committed"
16:05:40,904 DEBUG [DataNucleus.Datastore.Schema] - Check of existence of schema2000.tablename1 returned table type of null
16:05:40,905 DEBUG [DataNucleus.Datastore.Schema] - An error occurred while auto-creating schema elements - rolling back
16:05:41,109 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" non enlisted to a transaction is being committed.
16:05:41,110 DEBUG [DataNucleus.Connection] - Connection "com.mchange.v2.c3p0.impl.NewProxyConnection@1dff2e1b" closed
javax.persistence.PersistenceException: Required table missing : "schema2000.tablename1" in Catalog "" Schema "schema2000". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
at org.datanucleus.api.jpa.NucleusJPAHelper.getJPAExceptionForNucleusException(NucleusJPAHelper.java:274)
at org.datanucleus.api.jpa.JPAEntityManager.merge(JPAEntityManager.java:519)
I suggest you look closer at the case-sensitivity of your identifiers. DataNucleus logs what the JDBC driver allows with a line like
Supported Identifier Cases : "MixedCase" UPPERCASE "MixedCase-Sensitive"
so possibly it requires the schema in UPPERCASE, or maybe quoted (all RDBMS are different, and some even differ depending on the operating system they're running on).
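For example, if the driver reports UPPERCASE as the supported case, the mapping would need something along these lines (same identifiers as in the question, just upper-cased; the class name is a placeholder):

import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "TABLENAME1", schema = "SCHEMA2000")
public class MyEntity { /* fields omitted */ }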
Obviously embedding datastore-specific info in annotations is not recommended.
Related
I am using Spring Boot 2.7.2 and Hibernate 5.6 with Oracle 12.2 to write a web application. I use the repository model to do an insert and test with MockMvc. With SQL debug turned on, I get error ORA-32575 at the point where the insert statement executes: the debug log shows INSERT INTO TABLE (COL1, COL2, ID) VALUES ('X','Y',DEFAULT), followed by Oracle error 32575. The ID field in question is the primary key of a Hibernate POJO and uses GenerationType.SEQUENCE; the class is an @Entity that points to a @Table.
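For reference, the mapping looks roughly like this (a sketch; the table, sequence, and field names are placeholders, not the actual code):

import javax.persistence.*;

@Entity
@Table(name = "MY_TABLE")
public class MyRow {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "my_seq")
    @SequenceGenerator(name = "my_seq", sequenceName = "MY_TABLE_SEQ", allocationSize = 1)
    private Long id;

    @Column(name = "COL1")
    private String col1;

    @Column(name = "COL2")
    private String col2;
}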
The DataSource uses the "thin" driver from ojdbc8.jar and is set up with a @Configuration annotation in the application during Tomcat startup. Taken by itself, none of this produces the error above.
However, I have a requirement to connect to each database user through a PROXY USER account because we use Oracle Label Security. The grant looks something like ALTER USER userX GRANT CONNECT THROUGH proxyuser. Using the database driver, it would be something like:
Properties proxyProps = new Properties();
proxyProps.setProperty(OracleConnection.PROXY_USER_NAME, user);
oraCon.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, proxyProps);
This is being done inside an application class called ProxyDelegatingDataSourceThin, which extends DelegatingDataSource, a Spring class that I believe gets called whenever a new connection attempt is made.
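A stripped-down sketch of that class (the real one has more to it; currentUser() is a placeholder for however the application resolves the end user):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class ProxyDelegatingDataSourceThin extends DelegatingDataSource {
    @Override
    public Connection getConnection() throws SQLException {
        Connection con = super.getConnection();
        // Open an Oracle proxy session on top of the pooled connection
        OracleConnection oraCon = con.unwrap(OracleConnection.class);
        Properties proxyProps = new Properties();
        proxyProps.setProperty(OracleConnection.PROXY_USER_NAME, currentUser());
        oraCon.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, proxyProps);
        return con;
    }

    private String currentUser() {
        return "userX"; // placeholder: resolve the per-request user here
    }
}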
Again, queries work fine and updates seem to work; it's only the INSERTs that fail. The ID column itself is a NUMBER and is flagged as the primary key; it is not set up as any kind of IDENTITY column.
The error seems to want the ID column omitted from the INSERT statement altogether, but Hibernate or Spring generates it with the DEFAULT in the VALUES clause that is associated with ID.
I'm hoping someone can help; I've spent days on this. Things I have tried:
- Setting the @Id column mapping to nullable, insertable, and updatable all false
- Using merge() and persist() from the EntityManager instead of the Spring Repository save() method
- Setting the implicit caching property to true
Adding more info...
When I remove the code above that opens the proxy session, I don't get the error. I also printed some info from the database context while using the proxy session: the PROXY user is the proxy account and the SESSION user is the user connecting through it.
Whether I use the Oracle Thin Driver or UCP I get the same result.
I am setting up a Spring Boot application to connect to HP NonStop Tandem's SQL/MX. At first I achieved this connection by hard-coding the JDBC parameters (data source, URL, etc.) in the service section of the application, and it worked: I was able to access tables by executing a query.
Now I am trying to remove the hard-coded part and keep the database-related info in the application.properties file, but I am getting the following error:
org.springframework.jdbc.support.MetaDataAccessException: JDBC DatabaseMetaData method not implemented by JDBC driver - upgrade your driver; nested exception is java.lang.AbstractMethodError: Method com/tandem/t4jdbc/SQLMXConnection.isValid(I)Z is abstract
Can someone help me understand the root cause? The same driver jar is used in both cases: it works when the data source details are hard-coded, but fails when they come from application.properties. Does the jar really need an upgrade?
I encountered the same exception when using Spring Data JPA in a Spring Boot application with the JTDS driver and the Hikari connection pool. In my case I discovered that the following fixed the problem:
Examining the class com.zaxxer.hikari.pool.PoolBase, the following can be observed:
this.isUseJdbc4Validation = config.getConnectionTestQuery() == null;
Thus JDBC 4 validation will not be attempted if there is a connection test query configured. In a Spring Boot application, this can be accomplished like this:
spring.datasource.hikari.connection-test-query=select 1;
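If the pool is configured programmatically rather than through properties, the equivalent would be (a sketch against HikariCP's public API; the jTDS URL is hypothetical):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:jtds:sqlserver://localhost:1433/mydb"); // hypothetical URL
config.setConnectionTestQuery("select 1"); // bypasses JDBC4 isValid() validation
HikariDataSource dataSource = new HikariDataSource(config);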
Regretfully I do not have any experience with the T4SQLMX driver but nevertheless hope this can be of some use.
I recently fought through the same issue. In my case I was using a JDBC type 3 driver, but my Spring version only supported a type 4 driver, so the call to the method mentioned above caused the error.
I suggest you look for a type 4 driver for your particular database and see if that resolves your issue.
I have an application that uses a JdbcTemplate to perform queries on a MySQL database. If the JdbcTemplate ever throws an org.springframework.dao.DataAccessException, it logs the exception's stack trace. However, I'd also like to include the SQL query that caused the exception to be thrown. Is there an easy way to do this that doesn't involve writing custom error messages for every place JdbcTemplate is used?
If you only intend to log SQL statements during an exception, you might have to write your own custom subclass of JdbcTemplate and alter the logging preconditions, as seen in the source code on GitHub.
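A minimal sketch of that approach (only one query variant shown; you would need similar overrides for each execute/update/query method you actually use):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class LoggingJdbcTemplate extends JdbcTemplate {
    public LoggingJdbcTemplate(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public <T> List<T> query(String sql, RowMapper<T> rowMapper, Object... args)
            throws DataAccessException {
        try {
            return super.query(sql, rowMapper, args);
        } catch (DataAccessException e) {
            // log the failing statement together with the stack trace
            logger.error("Query failed: " + sql, e);
            throw e;
        }
    }
}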
If that is not the case, you may consider the following.
From the Spring documentation, all SQL statements are logged at DEBUG level:
All SQL issued by this class is logged at the DEBUG level under the category corresponding to the fully qualified class name of the template instance (typically JdbcTemplate, but it may be different if you are using a custom subclass of the JdbcTemplate class).
You may also change the JDBC URL by setting profileSQL to true to trace the SQL.
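For example (host and schema hypothetical):
jdbc:mysql://localhost:3306/mydb?profileSQL=true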
MySQL Connection Reference Documentation
I am currently enhancing an application that uses Spring and Hibernate. There are multiple places where the application communicates with the DB (Postgres) via prepared statements.
Until now, the application communicated with Postgres via DBCP.
Change:
The application now communicates with Postgres via pgbouncer,
i.e.: application -> dbcp -> pgbouncer -> postgres
I understand this isn't the most ideal solution, i.e. having two poolers, but due to the current architecture we require both of them.
Requirement:
pgbouncer does not support prepared statements in transaction mode, and they therefore have to be eliminated.
Changes made to eliminate prepared statements:
1) postgresql: VERSION = 9.2.6
no change
2) pgbouncer: in the config file, set the following attributes:
ignore_startup_parameters=extra_float_digits
pool_mode=transaction
server_reset_query=
3) jdbc: the prepareThreshold has been set accordingly,
i.e. : jdbc:postgresql://localhost:6541/postgres?prepareThreshold=0
JAVA VERSION = 1.7.0_51
JDBC DRIVER = postgresql-9.3-1102.jdbc41-3.jar
4) dbcp:
poolPreparedStatements = false
maxOpenPreparedStatements = 0
5) hibernate : no changes
6) spring : no changes
Issue:
In spite of all these changes, I still see prepared statements being created and transactions failing because of that.
"ERROR: prepared statement "S_21" does not exist; nested exception is org.postgresql.util.PSQLException: ERROR: prepared statement "S_21 " does not exist"
I have removed all application logic that explicitly used prepared statements.
How can I prevent the other prepared statements from being created?
Does spring or hibernate internally create prepared statements for their usage? If yes, how do I disable them?
I understand that this post is from a few years ago, but I am still facing the same issues. Unfortunately the suggested changes are not working for my current use case.
Facing the following issue:
- "Error: Prepared statement xxx does not exist"
- "Error: prepared statement xxx already exists"
I tried following the proposed changes but am still getting the same error.
Tech Stack:
Spring Boot (2.1.7.RELEASE)
Spring Data (JPA + Hibernate)
The application is deployed on Heroku using Heroku Postgres.
Client-side PgBouncer.
Modified the DB url with the following properties: "?sslmode=disable&prepareThreshold=0&preparedStatementCacheQueries=0"
The following settings are added to Heroku config:
PGSSLMODE=disable
PGBOUNCER_POOL_MODE = transaction
PGBOUNCER_IGNORE_STARTUP_PARAMETERS = extra_float_digits
set the PGBOUNCER_URLS config value to the DB URLs
Spring Data is set up to use two databases (read/write and read-only).
Using @Transactional(readOnly = true) together with an @Around("@annotation(transactional)") aspect:

@Around("@annotation(transactional)")
public Object proceed(ProceedingJoinPoint proceedingJoinPoint, Transactional transactional) throws Throwable {
    try {
        if (transactional.readOnly()) {
            RoutingDataSource.setReplicaRoute();
            LOGGER.info("Routing database call to the read replica");
        }
        return proceedingJoinPoint.proceed();
    } finally {
        RoutingDataSource.clearReplicaRoute();
    }
}
The following configuration is working on my system without any ERROR: prepared statement "S_21" does not exist errors. Hope it helps:
pgBouncer 1.6.1, pool_mode = transaction
Added to Hibernate db-connection string: prepareThreshold=0
Postgresql-JDBC 9.4-1203-jdbc41 driver
Disabled prepared statements in Hibernate 4.x:
<property name="hibernate.cache.use_query_cache">false</property>
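For reference, with prepareThreshold the connection string would look something like this (host, port, and database hypothetical):
jdbc:postgresql://localhost:5432/mydb?prepareThreshold=0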
I'm seeing something odd when I run a query in an application deployed in Oracle Application Server 10.1.3, with Oracle10g.
When I run a statement against the database directly (e.g. a standalone app that calls a DAO implemented with Hibernate), I see the following:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
[main] TRACE org.hibernate.type.LongType - binding '1768334' to parameter: 1
[main] TRACE org.hibernate.type.TimestampType - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
[main] TRACE org.hibernate.type.BinaryType - returning '7f587f608090cac6c9c68081818180b380b380807f5b80c3807f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f40808b8880918091818191807f44809f8080818581818181818180808080808080808182838485868788898a8b7f44803590808281838382848385858484808081fd8182838084918592a1b1c18693d1e187a2f194b201112188a3c2314195d25170a4b3e2f202898a969798999aa5a6a7a8a9aab4b5b6b7b8b9bac3c4c5c6c7c8c9cad3d4d5d6d7d8d9dae3e4e5e6e7e8e9eaf3f4f5f6f7f8f9fa030405060708090a12131415161718191a22232425262728292a32333435363738393a42434445464748494a52535455565758595a6162636465666768696a7172737475767778797a7f5a808881818080bf80fef947bf520c730eff25ada7bd007c7f807a460efd87677f805625220aab7f59' as column: CONTENT63_0_
However, the same DAO operation run within the application server returns the following:
select
documentco0_.CONTENT_ID as CONTENT1_63_0_,
documentco0_.TSTAMP as TSTAMP63_0_,
documentco0_.CONTENT as CONTENT63_0_
from
MySchema.MyTable documentco0_
where
documentco0_.CONTENT_ID=?
2013-08-06 12:49:46,484 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:133 nullSafeSet()) - binding '1768334' to parameter: 1
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '80d48081818c808080818080808180808099ff0c809a5c9d809a5c9c80828082808080817f587f608090cac6c9c68081808080804818f7ef8081808080808080808080808080808080808080809a5c9c83408c508081' as column: CONTENT63_0_
You can see that the identifier and timestamp are the same in both cases, but the content blob is different: 360 bytes in the first case and 86 bytes in the second case.
The stand-alone application uses a BasicDataSource, while the application on the server uses a JNDI data source. I have verified that the BasicDataSource contains the same JDBC url that is used in the JNDI data source. Both data sources use the same credentials.
The database operation in the application server has a different trace output, using NullableType::nullSafeGet() to display information instead of org.hibernate.type tracing. I'm not sure if that is relevant.
Is there something obvious that I am overlooking here? I can't see why I am getting different results when running the same query on the same database.
Edit: on OAS I have configured a JDBC connection pool that uses the connection factory class oracle.jdbc.pool.OracleDataSource, and the JDBC data source is a managed data source pointing to that connection pool.
I'm thinking there may be an issue with different Oracle JDBC drivers? The BasicDataSource for the stand-alone app uses the JDBC driver oracle.jdbc.driver.OracleDriver and the dialect org.hibernate.dialect.Oracle10gDialect. I can't see any place in the OAS administration that shows the equivalent values.
Please have a look at this article.
It looks like, for some reason, OAS returns only 86 bytes of the BLOB value unless you specify a LobHandler in your configuration.
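For instance, with the Spring/Hibernate 3 integration of that era, the wiring would look roughly like this (a sketch, not tested on OAS; the OC4J native JDBC extractor is what I would try first there):

import org.springframework.jdbc.support.lob.OracleLobHandler;
import org.springframework.jdbc.support.nativejdbc.OC4JNativeJdbcExtractor;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

// Unwrap the pooled connection so the Oracle LOB API sees the native connection
OracleLobHandler lobHandler = new OracleLobHandler();
lobHandler.setNativeJdbcExtractor(new OC4JNativeJdbcExtractor());

LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
sessionFactory.setLobHandler(lobHandler);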
You can also find more info in this thread on CodeRanch describing the same issue.
Hope this helps!