I'm trying to configure a seqhilo generator for a Hibernate application on Oracle.
<id name="idTest" type="int">
<column name="ID_TEST" precision="6" scale="0" />
<generator class="seqhilo">
<param name="sequence">S_TEST</param>
<param name="max_lo">1000</param>
</generator>
</id>
I created the sequence (S_TEST) in an Oracle 10g database. Unfortunately it's not working: the id is always null.
Could you explain how to use the seqhilo generator with an Oracle database? Maybe I'm confused :(
Here is the generated SQL trace:
08:52:44,441 DEBUG AnnotationTransactionAttributeSource:107 - Adding transactional method [create] with attribute [PROPAGATION_REQUIRED,ISOLATION_DEFAULT]
08:52:44,441 DEBUG HibernateTransactionManager:346 - Using transaction object [org.springframework.orm.hibernate3.HibernateTransactionManager$HibernateTransactionObject@1786a3c]
08:52:44,441 DEBUG HibernateTransactionManager:374 - Creating new transaction with name [com.cylande.utilities.GenericDAO.create]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
08:52:44,472 DEBUG HibernateTransactionManager:496 - Opened new Session [org.hibernate.impl.SessionImpl@1bf68a9] for Hibernate transaction
08:52:44,472 DEBUG HibernateTransactionManager:507 - Preparing JDBC Connection of Hibernate Session [org.hibernate.impl.SessionImpl@1bf68a9]
08:52:44,487 DEBUG DriverManagerDataSource:163 - Creating new JDBC DriverManager Connection to [jdbc:oracle:thin:@localhost:1521:orcl]
08:52:44,519 DEBUG HibernateTransactionManager:572 - Exposing Hibernate transaction as JDBC transaction [oracle.jdbc.driver.T4CConnection@8c7be5]
08:52:44,519 DEBUG TransactionSynchronizationManager:186 - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@11a0d35] for key [org.springframework.jdbc.datasource.DriverManagerDataSource@13b5a3a] to thread [main]
08:52:44,519 DEBUG TransactionSynchronizationManager:186 - Bound value [org.springframework.orm.hibernate3.SessionHolder@12c4c57] for key [org.hibernate.impl.SessionFactoryImpl@1594a88] to thread [main]
08:52:44,519 DEBUG TransactionSynchronizationManager:261 - Initializing transaction synchronization
08:52:44,534 DEBUG TransactionInterceptor:290 - Getting transaction for [com.cylande.utilities.GenericDAO.create]
08:52:44,534 DEBUG TransactionSynchronizationManager:142 - Retrieved value [org.springframework.orm.hibernate3.SessionHolder@12c4c57] for key [org.hibernate.impl.SessionFactoryImpl@1594a88] bound to thread [main]
08:52:44,550 DEBUG SQL:102 - select categorie_.ID_CATEGORIE, categorie_.CATEGORIE as CATEGORIE9_ from AHMED.CATEGORIE categorie_ where categorie_.ID_CATEGORIE=?
08:52:44,550 TRACE IntegerType:128 - binding '1' to parameter: 1
08:52:44,550 TRACE StringType:170 - returning 'Test dintegration' as column: CATEGORIE9_
08:52:44,550 DEBUG TransactionInterceptor:319 - Completing transaction for [com.cylande.utilities.GenericDAO.create]
08:52:44,550 DEBUG HibernateTransactionManager:880 - Triggering beforeCommit synchronization
08:52:44,550 DEBUG HibernateTransactionManager:893 - Triggering beforeCompletion synchronization
08:52:44,550 DEBUG HibernateTransactionManager:707 - Initiating transaction commit
08:52:44,566 DEBUG HibernateTransactionManager:651 - Committing Hibernate transaction on Session [org.hibernate.impl.SessionImpl@1bf68a9]
08:52:44,566 DEBUG SQL:102 - insert into AHMED.TEST (ID_APPLICATION, ID_MODULE, ID_CATEGORIE, ID_PAGE, NOM_TEST, DESCRIPTION_TEST, ID_TEST) values (?, ?, ?, ?, ?, ?, ?)
08:52:44,566 TRACE IntegerType:121 - binding null to parameter: 1
08:52:44,566 TRACE IntegerType:121 - binding null to parameter: 2
08:52:44,566 TRACE IntegerType:128 - binding '1' to parameter: 3
08:52:44,566 TRACE IntegerType:121 - binding null to parameter: 4
08:52:44,566 TRACE StringType:128 - binding 'nomTest2' to parameter: 5
08:52:44,566 TRACE IntegerType:128 - binding '0' to parameter: 7
08:52:44,581 WARN JDBCExceptionReporter:77 - SQL Error: 1, SQLState: 23000
08:52:44,581 ERROR JDBCExceptionReporter:78 - ORA-00001: unique constraint violated
The field that must be generated (ID_TEST) is always equal to zero; even if I change the generator to sequence or native, the result remains the same.
The declaration looks correct, or at least very close to the sample from the documentation:
5.1.4.2. Hi/lo algorithm
The hilo and seqhilo generators provide two alternate implementations of the hi/lo algorithm. The first implementation requires a "special" database table to hold the next available "hi" value. Where supported, the second uses an Oracle-style sequence.
<id name="id" type="long" column="cat_id">
<generator class="seqhilo">
<param name="sequence">hi_value</param>
<param name="max_lo">100</param>
</generator>
</id>
But in order to debug the issue, I would try to:
- activate the logging of the generated SQL to see what is happening exactly (and post the result);
- stick to the above example, i.e. define the column in the id element and use a long;
- if this works, start modifying the configuration.
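Also, as a sanity check outside Hibernate, a minimal JDBC sketch like the following (connection URL copied from your trace; the user and password are placeholders) would confirm that the sequence is visible to the account Hibernate connects as:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceCheck {
    public static void main(String[] args) throws Exception {
        // URL taken from the posted trace; credentials are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:orcl", "AHMED", "password");
             Statement st = con.createStatement();
             // If this query fails, Hibernate's seqhilo cannot fetch a "hi" value either.
             ResultSet rs = st.executeQuery("select S_TEST.NEXTVAL from dual")) {
            if (rs.next()) {
                System.out.println("next sequence value: " + rs.getLong(1));
            }
        }
    }
}

If NEXTVAL works here but the id is still zero, the problem is more likely in how the entity is saved than in the sequence itself.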
Related
I have a Spring Boot application using WebFlux and r2dbc-postgres. I have discovered a strange issue when trying to do some DB operations in a flatMap().
Code example:
@Transactional
public Mono<Void> insertDummyFooBars() {
    return Flux.fromIterable(IntStream.rangeClosed(1, 260).boxed().collect(Collectors.toList()))
            .log()
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()))
            .log()
            .concatMap(i -> this.repository.findAll())
            .then();
}
It seems like flatMap can process at most 256 elements in a batch (Queues.SMALL_BUFFER_SIZE defaults to 256). So when I ran the code above (with 260 elements), I got a TransientDataAccessResourceException with the following message:
Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException
There is no "Releasing R2DBC Connection" log entry after this exception. The Postgres connection/session remains in the "idle in transaction" state, and the app stops working properly once the pool's max size is reached and all connections are idle in transaction. I think the connection should be released whether or not an exception happened.
If I use concatMap instead of flatMap, it works as expected: no exception, and the connection is released! It is also fine with flatMap when there are 256 elements or fewer.
Is it possible to force the pgdb connection to close? What should I do if I have a lot of DB operations in a flatMap like this? Should I replace all of them with concatMap? Is there a global solution for this?
Versions:
Postgres: 12.6, Spring-boot: 2.7.6
Demo project
LOG:
2022-12-08 16:32:13.092 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(256)
2022-12-08 16:32:13.092 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.114 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onNext(FooBar(id=258, foo=test-1))
2022-12-08 16:32:13.143 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [SELECT foo_bar.* FROM foo_bar]
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | request(1)
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(257)
2022-12-08 16:32:13.144 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onComplete()
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | cancel()
2022-12-08 16:32:13.160 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onError(org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded)
2022-12-08 16:32:13.167 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 :
org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded
at org.springframework.r2dbc.connection.ConnectionFactoryUtils.convertR2dbcException(ConnectionFactoryUtils.java:215) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at org.springframework.r2dbc.core.DefaultDatabaseClient.lambda$inConnectionMany$8(DefaultDatabaseClient.java:147) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at reactor.core.publisher.Flux.lambda$onErrorMap$29(Flux.java:7105) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.Flux.lambda$onErrorResume$30(Flux.java:7158) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.4.25.jar:3.4.25]
I have tried to change Queues.SMALL_BUFFER_SIZE, and also tried to add a concurrency value to the flatMap. It works when I reduce the value to 255, but I think that is not a good solution.
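For illustration, this is the concurrency-capped variant I meant, trimmed to the insert part (255 is just any value below the driver's request queue size; Flux.range replaces the IntStream boilerplate but is equivalent):

@Transactional
public Mono<Void> insertDummyFooBars() {
    // Capping flatMap's concurrency keeps the number of in-flight statements
    // below the r2dbc request queue limit (256 by default).
    return Flux.range(1, 260)
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()), 255)
            .then();
}

It avoids the exception, but it feels like I'm tuning around the limit rather than fixing the underlying connection leak.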
I have a query where I'd like to retrieve a huge number of rows based on their PK ids, using select ... where id in (<ids>) via the fluent API in JDBI, like this:
jdbi(db).withHandle(h -> h.createQuery(SQL).bindList("ids", listOfIds).mapToMap().list());
This works as long as the number of ids doesn't exceed what the database (DB2) can handle in an in-clause. Obviously, in my case, the list of ids gets longer than DB2 can handle. So I split it into many lists in a List<List<Integer>> listOfIdLists and create a List<Map<String, Object>> result.
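For reference, the chopping helper is essentially this minimal sketch (the real chopListToLists is equivalent):

static List<List<Integer>> chopListToLists(List<Integer> ids, int chopSize) {
    // Split ids into consecutive sublists of at most chopSize elements.
    List<List<Integer>> chops = new ArrayList<>();
    for (int i = 0; i < ids.size(); i += chopSize) {
        chops.add(ids.subList(i, Math.min(i + chopSize, ids.size())));
    }
    return chops;
}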
Now I have to somehow iterate over listOfIdLists and for each iteration add the result to result. Here is one of many tested variants:
List<Map<String, Object>> result = new ArrayList<>();
List<List<Integer>> listOfIdLists = chopListToLists(ids, 10);
Iterator<List<Integer>> oneChopIterator = listOfIdLists.iterator();
while (oneChopIterator.hasNext()) {
    result.addAll(jdbi(db).withHandle(handle -> handle.createQuery(SQL)
            .bindList("id", oneChopIterator.next()).mapToMap().list()));
}
Obviously, variants with chops.forEach and try (Handle h = jdbi(db).open()) { /* iterate and addAll */ } have been tried as well.
All of this runs in a Quarkus app, and I get exceptions from Arjuna when iterating. For testing, I can add to result without errors when there is no iteration and I instead just pick the first element/list in chops.
The exception is:
2022-11-02 00:13:15,219 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state RUN
2022-11-02 00:13:15,239 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffff7f000101:925d:6361a7cf:0 invoked while multiple threads active within it.
2022-11-02 00:13:15,242 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012381: Action id 0:ffff7f000101:925d:6361a7cf:0 completed with multiple threads - thread Quarkus Main Thread was in progress with java.base@17.0.4/sun.nio.ch.SocketDispatcher.read0(Native Method)
java.base@17.0.4/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47)
java.base@17.0.4/sun.nio.ch.NioSocketImpl.tryRead(NioSocketImpl.java:261)
...
2022-11-02 00:13:15,244 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012108: CheckedAction::check - atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!
2022-11-02 00:13:15,315 WARN [io.agr.pool] (Transaction Reaper Worker 0) Datasource 'fs': JDBC resources leaked: 0 ResultSet(s) and 1 Statement(s)
2022-11-02 00:13:15,340 ERROR [no.cen.bat.dbs.Runner] (Quarkus Main Thread) dbsync 2 failed with throwable Unable to advance result set [statement:"select ...
...
2022-11-02 00:13:15,360 ERROR [no.cen.bat.dbs.Runner] (Quarkus Main Thread) org.jdbi.v3.core.result.ResultSetException: Unable to advance result set [statement:"select ...
...
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.31.10] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.b7.a(b7.java:794)
...
2022-11-02 00:13:15,739 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state CANCEL
2022-11-02 00:13:15,741 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012378: ReaperElement appears to be wedged: java.base@17.0.4/sun.nio.ch.SocketDispatcher.read0(Native Method)
java.base@17.0.4/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47)
...
2022-11-02 00:13:16,242 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state CANCEL_INTERRUPTED
2022-11-02 00:13:16,244 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012120: TransactionReaper::check worker Thread[Transaction Reaper Worker 0,5,main] not responding to interrupt when cancelling TX 0:ffff7f000101:925d:6361a7cf:0 -- worker marked as zombie and TX scheduled for mark-as-rollback
2022-11-02 00:13:21,478 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012110: TransactionReaper::check successfuly marked TX 0:ffff7f000101:925d:6361a7cf:0 as rollback only
2022-11-02 00:13:21,479 WARN [com.arj.ats.arjuna] (Quarkus Main Thread) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000101:925d:6361a7cf:0
2022-11-02 00:13:21,480 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012113: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] missed interrupt when cancelling TX 0:ffff7f000101:925d:6361a7cf:0 -- exiting as zombie (zombie count decremented to 0)
This runs on Java 17 with the latest Quarkus (2.13.3) and the IBM drivers that come with quarkus-jdbc-db2. JDBI version 3.34.0. Not running a native image.
The reason for the parameterized jdbi(db) is that the application has two datasources. It replaces a DB job where two databases were linked and data was copied from one database to the other with statements like insert into db1.mytable(a,b,c...) select a,b,c ... from db2.mytable where not exists (...);
The source database is on Linux and the target on z/OS. The app runs on Ubuntu 20.04.
So basically what the code does is retrieve all PK ids from each table in each database, use CollectionUtils.subtract(list1, list2) to identify ids missing from the target table, and then use the resulting list of ids to retrieve the full rows via a select ... from ... where id in (<ids>) query as described above. The resulting list of Map<String, Object> rows would then be inserted into the other table where they are missing.
The question is: how do I get this working without exceptions? I can brute-force it by deleting and inserting all rows, but I'd rather not.
The stacktrace element:
2022-11-02 00:13:15,244 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012108: CheckedAction::check - atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!
shows the narayana Transaction Reaper thread, which is responsible for timing out transactions, attempting to roll back the transaction. The text "atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!" says that an application thread is still running with the transaction context bound to it. This is normal application behavior, and the usual fix for this problem is to extend the timeout, do less work inside the transaction, or add more compute/networking resources to the setup.
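If extending the timeout is the route you take, Quarkus lets you set it per transactional method; a minimal sketch (annotation from the quarkus-narayana-jta extension; the 3600-second value and method name are just examples):

import io.quarkus.narayana.jta.runtime.TransactionConfiguration;
import javax.transaction.Transactional;

public class SyncRunner {

    // Raises the JTA timeout for this method so the reaper does not abort
    // the long-running sync. Alternatively, commit per chunk so each
    // transaction does less work.
    @Transactional
    @TransactionConfiguration(timeout = 3600) // seconds
    void syncMissingRows() {
        // chunked selects and inserts as described in the question
    }
}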
I am using spring-data-rest and Hibernate to expose a table File(Id, Name) from a SAP IQ (Sybase IQ) database. The error below occurs when I do a GET on the File table using curl http://localhost:8080/files/1.
2022-07-20 20:01:03 WARN [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: JZ0SA
2022-07-20 20:01:03 ERROR [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - JZ0SA: Prepared Statement: Input parameter not set, index: 0.
2022-07-20 20:01:03 INFO [http-nio-8081-exec-1] o.h.e.i.DefaultLoadEventListener - HHH000327: Error performing load command
org.hibernate.exception.GenericJDBCException: could not extract ResultSet
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:67)
I have the same table in "Sybase 16" but it is working perfectly with the same code base.
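For reference, the exposed mapping is essentially this minimal sketch (class and repository names are mine; the real code is equivalent):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@Entity
@Table(name = "File")
public class File {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted
}

@RepositoryRestResource(path = "files")
interface FileRepository extends JpaRepository<File, Long> {}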
Has anyone faced a similar issue?
Thanks in advance.
I am running Cassandra with about 20k records in it to play with. I am trying to run a filter in Pig on this data, but I am getting the following message back:
2015-07-23 13:02:23,559 [Thread-4] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.lang.RuntimeException: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:260)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:205)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:263)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:44)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader$RowIterator.<init>(CqlRecordReader.java:259)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:151)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:256)
... 7 more
You would think this is an obvious error, and believe me, there are a ton of results on Google for it. It's clear that some piece of my data isn't conforming to the expected type of a given column. What I don't understand is 1) why this is happening, and 2) how to debug it. If I try to insert invalid data into Cassandra from my Node.js app, it throws this kind of error when my data type doesn't match the column's data type, so this shouldn't even be possible. I've read that data validation using UTF8 is wonky and that setting a different kind of validation is the answer, but I don't know how to do that. Here are my steps to reproduce:
grunt> define CqlNativeStorage org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> test = load 'cql://blah/blahblah' USING CqlNativeStorage();
grunt> describe test;
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - Found ksDef name: blah
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - partition keys: ["ad_id"]
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster keys: []
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - row key validator: org.apache.cassandra.db.marshal.UTF8Type
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster key validator: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
blahblah: {ad_id: chararray,address: chararray,city: chararray,date_created: long,date_listed: long,fireplace: bytearray,furnished: bytearray,garage: bytearray,neighbourhood: chararray,num_bathrooms: int,num_bedrooms: int,pet_friendly: bytearray,postal_code: chararray,price: double,province: chararray,square_feet: int,url: chararray,utilities_included: bytearray}
grunt> query1 = FILTER blahblah BY city == 'New York';
grunt> dump query1;
Then it runs for a while, dumps out tons of logs, and the error appears.
Discovered my problem: the Pig partitioner did not match CQL3, and therefore the data was being parsed incorrectly. Previously the environment variable was PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner. After I changed it to PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner, it started working.
I am having an issue with guice-persist and guice-servlet (HTTP-request-scoped JPA sessions) where I attempt to update an entity's value and persist that update, but the update is never persisted to the database. I have tried forcing the write with an entityManager.flush() and entityManager.getTransaction().commit(), but when I look in the logs nothing seems to happen, even when the HTTP session ends and the JDBC connection is released.
I would normally expect to see Hibernate issue a SQL update statement, but the update never seems to register. What strikes me as odd is that I have no problem creating new entities; this only seems to be affecting updates.
I have a Singleton scoped servlet that has an injected UserDao, which uses an injected Provider<EntityManager>.
Here is my persistence.xml:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="db-manager">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>com.turms.server.database.DurableUser</class>
        <properties>
            <!-- Disable the second-level cache -->
            <property name="hibernate.cache.use_second_level_cache" value="false"/>
            <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
            <!--<property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.EmbeddedDriver"/>-->
            <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/TestService"/>
            <property name="hibernate.connection.username" value="xxxxx"/>
            <property name="hibernate.connection.password" value="xxxxx"/>
            <property name="hibernate.connection.pool_size" value="1"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create"/>
            <property name="hibernate.archive.autodetection" value="class"/>
            <property name="hibernate.show_sql" value="true"/>
            <!-- Default is false for backwards compatibility. Should be used on all new projects -->
            <property name="hibernate.id.new_generator_mappings" value="true"/>
        </properties>
    </persistence-unit>
</persistence>
I have replicated this problem using MySQL and Derby databases.
Here is an example of an update attempt that fails:
public boolean testUpdate(DurableUser user) {
    entityManager.get().getTransaction().begin();
    String testUpdateString = "askdjfaskdjfalsdkf";
    user.setField(testUpdateString);
    entityManager.get().persist(user);
    log.info("user field persisted");
    entityManager.get().flush();
    entityManager.get().getTransaction().commit();
    entityManager.get().clear();
    return true;
}
It takes a DurableUser (created without issue in a previous HTTP session) and updates a field. Even with an explicit flush() and commit(), Hibernate issues no update statement.
I noticed in the logs that org.hibernate.internal.util.EntityPrinter does log the user's toString(), which shows the updated field. Would that mean Hibernate does recognize that the entity has been dirtied and is still not persisting the changes?
Can anybody answer why I can successfully create new entities but not update existing ones? I'm completely stumped so far.
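One thing I still plan to try is re-attaching the entity with merge() instead of persist(), in case the user loaded in the earlier request is detached in this request's EntityManager (a minimal sketch, assuming the detachment theory is right; I haven't confirmed it yet):

public boolean testUpdateWithMerge(DurableUser user) {
    // entityManager is the injected Provider<EntityManager> mentioned above.
    EntityManager em = entityManager.get();
    em.getTransaction().begin();
    user.setField("askdjfaskdjfalsdkf");
    // merge() copies the state of a (possibly detached) instance onto a
    // managed one, so dirty checking can pick the change up at commit time.
    DurableUser managed = em.merge(user);
    em.getTransaction().commit();
    return managed != null;
}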
EDIT:
Here are the logs from the session:
2013-12-04 20:56:10,881 DEBUG http-apr-8080-exec-7 spi.AbstractTransactionImpl - begin
2013-12-04 20:56:10,881 DEBUG http-apr-8080-exec-7 jdbc.JdbcTransaction - initial autocommit status: true
2013-12-04 20:56:10,881 DEBUG http-apr-8080-exec-7 jdbc.JdbcTransaction - disabling autocommit
2013-12-04 20:56:10,881 INFO http-apr-8080-exec-7 dao.UserDao - user field persisted
2013-12-04 20:56:10,881 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Processing flush-time cascades
2013-12-04 20:56:10,881 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Dirty checking collections
2013-12-04 20:56:10,882 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.cards#7], was: [com.xxx.server.database.DurableUser.cards#7] (initialized)
2013-12-04 20:56:10,882 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.donations#7], was: [com.xxx.server.database.DurableUser.donations#7] (initialized)
2013-12-04 20:56:10,882 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.installments#7], was: [com.xxx.server.database.DurableUser.installments#7] (initialized)
2013-12-04 20:56:10,882 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Flushed: 0 insertions, 0 updates, 0 deletions to 1 objects
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Flushed: 0 (re)creations, 0 updates, 0 removals to 3 collections
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 util.EntityPrinter - Listing entities:
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 util.EntityPrinter - com.xxx.server.database.DurableUser{donations=[], installments=[], id=7, username=test, name=test testerson, passwordChangeKey=askdjfaskdjfalsdkf, cards=[]}
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 spi.AbstractTransactionImpl - committing
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Processing flush-time cascades
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Dirty checking collections
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.cards#7], was: [com.xxx.server.database.DurableUser.cards#7] (initialized)
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.donations#7], was: [com.xxx.server.database.DurableUser.donations#7] (initialized)
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 internal.Collections - Collection found: [com.xxx.server.database.DurableUser.installments#7], was: [com.xxx.server.database.DurableUser.installments#7] (initialized)
2013-12-04 20:56:10,883 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Flushed: 0 insertions, 0 updates, 0 deletions to 1 objects
2013-12-04 20:56:10,884 DEBUG http-apr-8080-exec-7 .AbstractFlushingEventListener - Flushed: 0 (re)creations, 0 updates, 0 removals to 3 collections
2013-12-04 20:56:10,884 DEBUG http-apr-8080-exec-7 util.EntityPrinter - Listing entities:
2013-12-04 20:56:10,886 DEBUG http-apr-8080-exec-7 util.EntityPrinter - com.xxx.server.database.DurableUser{donations=[], installments=[], id=7, username=test, name=test testerson, passwordChangeKey=askdjfaskdjfalsdkf, cards=[]}
2013-12-04 20:56:10,886 DEBUG http-apr-8080-exec-7 jdbc.JdbcTransaction - committed JDBC Connection
2013-12-04 20:56:10,886 DEBUG http-apr-8080-exec-7 jdbc.JdbcTransaction - re-enabling autocommit
2013-12-04 20:56:11,613 DEBUG http-apr-8080-exec-7 internal.LogicalConnectionImpl - Releasing JDBC connection
2013-12-04 20:56:11,614 DEBUG http-apr-8080-exec-7 internal.LogicalConnectionImpl - Released JDBC connection