Hibernate JPQL UPDATE/DELETE operations inside a transaction - Spring

Using Hibernate (4.2.7.SP1), Spring and Oracle. I noticed that when the last line in the method (the JPQL UPDATE) is executed, but before the @Transactional method ends, dev's name 'A' is committed to the database and is visible (in SELECTs from another connection)!
@Transactional
public void doInTransaction()
{
    User user = userDao.findById("dev");
    user.setName("A");
    // bulk JPQL update on the same entity hierarchy
    userDao.getEntityManager().createQuery("UPDATE User set name='B'").executeUpdate();
}
Note that User is a subclass of Person with InheritanceType.JOINED, i.e. there are two tables involved; the name field is inherited from Person.
I found some information here http://in.relation.to/Bloggers/MultitableBulkOperations explaining how Hibernate performs these UPDATEs and DELETEs, and that for joined-inheritance hierarchies it creates temporary tables prefixed with HT_.
After some debugging, the issue as I see it can be represented in two lines:
update ILC_PERSON set name = 'A';
create global temporary table HT_ILC_PERSON_USER (id varchar2(255 char) not null) on commit delete rows;
-- below this point the JPQL "UPDATE User set name='B'" would execute
What happens is that when the DDL for creating the temporary table is executed, Oracle automatically commits the preceding DML.
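The effect can be reproduced outside Hibernate with plain JDBC. The sketch below is only illustrative (connection URL, credentials and table names are placeholders), but it shows how the DDL silently commits the pending UPDATE:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ImplicitCommitDemo {
    public static void main(String[] args) throws Exception {
        // placeholder connection details
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // pending DML inside the open transaction
                st.executeUpdate("UPDATE ILC_PERSON SET name = 'A'");
                // any DDL statement issues an implicit COMMIT in Oracle,
                // so the UPDATE above becomes visible to other sessions here
                st.execute("CREATE GLOBAL TEMPORARY TABLE HT_ILC_PERSON_USER "
                        + "(id VARCHAR2(255 CHAR) NOT NULL) ON COMMIT DELETE ROWS");
            }
            con.rollback(); // too late: the UPDATE was already committed by the DDL
        }
    }
}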
Questions:
Is this a Hibernate bug?
Is there some misconfiguration in the project (it uses LocalContainerEntityManagerFactoryBean with JpaTransactionManager)?
Does this simply mean we cannot use JPQL UPDATE/DELETE for entities with InheritanceType.JOINED inside one transaction?
Something else?

Related

Re-insert records in Oracle table with auto-generated identifiers using Hibernate

I have a few tables in my database whose primary keys are auto-generated using Hibernate's seqhilo generator configuration. We need to archive these records and, at a later point, be able to restore them for a particular business scenario. My question is: if I restore these tables with simple INSERT statements, will that suffice, or should I worry about the sequence generator? I would like to keep the same IDs and not have new ones generated. To be clear, these re-inserts will happen via direct SQL and not via Hibernate.

Need help understanding the behaviour of SELECT ... FOR UPDATE causing a deadlock

I have two concurrent transactions executing this bit of code (simplified for illustration purposes):
@Transactional
public void deleteAccounts() {
    List<User> users = em.createQuery("select u from User u", User.class)
            .setLockMode(LockModeType.PESSIMISTIC_WRITE)
            .getResultList();
    for (User user : users) {
        em.remove(user);
    }
}
My understanding is that one of the transactions, say transaction A, should execute the SELECT first, lock all the rows it needs and then go on with the DELETEs while the other transaction should wait for A's commit before performing the SELECT. However, this code is deadlocking. Where am I wrong?
The USER table probably has a lot of foreign keys referring to it. If any of them are un-indexed, Oracle will lock the entire child table while it deletes the row from the parent table. If multiple statements run at the same time, even for a different user, the same child tables will be locked. Since the order of those recursive operations cannot be controlled, it is possible that multiple sessions will lock the same resources in a different order, causing a deadlock.
See this section in the Concepts manual for more information.
To resolve this, add indexes to any un-indexed foreign keys. If the column names are standard, a script like this could help you find potential candidates:
--Find un-indexed foreign keys.
--
--Foreign keys.
select owner, table_name
from dba_constraints
where r_constraint_name = 'USER_ID_PK'
and r_owner = 'THE_SCHEMA_NAME'
minus
--Tables with an index on the relevant column.
select table_owner, table_name
from dba_ind_columns
where column_name = 'USER_ID';
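Once a candidate is found, the fix itself is a plain CREATE INDEX on the foreign key column. As a hedged sketch from a Spring application (the table, column and index names below are purely hypothetical):
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ForeignKeyIndexer {
    // Index the un-indexed foreign key column found by the query above.
    // ORDERS, USER_ID and ORDERS_USER_ID_IDX are illustrative names only.
    public void indexUserForeignKey(DataSource dataSource) {
        new JdbcTemplate(dataSource)
                .execute("CREATE INDEX ORDERS_USER_ID_IDX ON ORDERS (USER_ID)");
    }
}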
When you use PESSIMISTIC_WRITE, JPA generally translates it to SELECT ... FOR UPDATE, which takes a lock in the database. That is not necessarily a row lock: it depends on the database and on how you configure the lock; some databases lock by page or block rather than by row. Check your database documentation to confirm how your database takes the lock, and whether you can change it to lock individual rows.
When you call the method deleteAccounts it starts a new transaction, and the lock stays active until that transaction commits (or rolls back), i.e. when the method has finished. If another transaction wants to acquire the same lock it cannot, and I think this is why you get the deadlock. I suggest you try another mechanism, maybe an optimistic lock, or locking individual entities.
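A minimal sketch of the optimistic alternative, assuming a version column can be added to the entity (field names are illustrative):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class User {
    @Id
    private Long id;

    private String name;

    // With a @Version column concurrent transactions no longer block each other;
    // the second concurrent writer fails with an OptimisticLockException
    // at flush/commit time and can simply retry its transaction.
    @Version
    private Long version;

    // getters and setters omitted
}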
You can also try giving the lock acquisition a timeout, like so:
em.createQuery("select u from User u", User.class)
    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
    .setHint("javax.persistence.lock.timeout", 5000)
    .getResultList();
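If the lock cannot be obtained within the hinted timeout, the JPA provider is expected to throw a LockTimeoutException (or a PessimisticLockException when the whole transaction has to be rolled back), which the caller can handle. A rough sketch:
import java.util.Collections;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.LockTimeoutException;

public class TimeoutAwareQuery {
    // Back off gracefully instead of waiting forever when the lock is not
    // granted within the javax.persistence.lock.timeout hint (milliseconds).
    public List<User> loadUsersForUpdate(EntityManager em) {
        try {
            return em.createQuery("select u from User u", User.class)
                    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                    .setHint("javax.persistence.lock.timeout", 5000)
                    .getResultList();
        } catch (LockTimeoutException e) {
            return Collections.emptyList();
        }
    }
}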
I found a good article that explains this error better; it is caused by the database:
Oracle automatically detects deadlocks and resolves them by rolling back one of the transactions/statements involved in the deadlock, thus releasing one set of resources/data locked by that transaction. The session that is rolled back will observe Oracle error ORA-00060: deadlock detected while waiting for resource. Oracle will also produce detailed information in a trace file under the database's UDUMP directory. Most commonly these deadlocks are caused by applications that involve multi-table updates in the same transaction, with multiple applications/transactions acting on the same table at the same time. These multi-table deadlocks can be avoided by locking tables in the same order in all applications/transactions, thus preventing a deadlock condition.
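One hedged way to apply the "lock in the same order" advice at row level from JPA is to read first without locks and then acquire the pessimistic locks one by one in a fixed order. This sketch assumes the User entity exposes a getId() returning a Comparable id and that it runs inside the surrounding transaction:
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class OrderedLocking {
    // Lock the rows in ascending id order so that two concurrent transactions
    // can never hold overlapping locks acquired in opposite orders.
    public List<User> lockUsersInIdOrder(EntityManager em) {
        List<User> users = new ArrayList<>(
                em.createQuery("select u from User u", User.class).getResultList());
        users.sort(Comparator.comparing(User::getId));
        for (User user : users) {
            em.lock(user, LockModeType.PESSIMISTIC_WRITE);
        }
        return users;
    }
}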

Oracle Database creation with existing tables

I have a database with 15 tables. Now, due to the development process, one column has to be added to all the tables in the database. This change should not affect the existing processes, because some other services are also consuming this database. To accomplish it I thought of creating a new database. Is there any other way to do it?
Usually it should be enough to create a new schema ("user") and create the tables in that new schema. In Oracle, identically named tables can exist in several schemas.
CREATE USER xxx IDENTIFIED BY yyy
You can create another schema for development and import the tables into the new schema. Developers should then use the development schema instead of the production schema. You could also create a new database and import from the current database, but that should be the last option.
What's wrong with alter table T add (COL varchar2(5)); ?
Of course, dependent stored procedures or packages become invalid.
You can leave them alone; then the first call would raise an exception and auto-recompile the called procedure. Or you can run alter procedure P compile;.

Hibernate `assigned` strategy returns 0 with sequence and trigger

I have a table that uses a trigger and a sequence to set its PK column.
The Hibernate mapping strategy for its PK is assigned.
This results in session.save(obj) returning an object with id=0.
How can I make it return the correctly assigned PK value?
session.getIdentifier() doesn't work!
assigned means: nobody generates the ID; the ID is set explicitly on the entity before persisting it.
What you want to do is impossible. Hibernate would have to insert an entity without knowing its ID, then the database would generate the ID, and Hibernate would have to reload the entity from the database to know its ID. But how would it reload the entity without knowing its ID?
The native generator does the same thing, and it works because the database provides a getLastGeneratedId() method which allows getting the ID that the database has generated. But you can't do that with Oracle and a trigger.
Remove the trigger from the database, use the sequence generator, and everything will be fine.
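A rough sketch of what that mapping could look like with JPA annotations (entity, sequence and generator names are placeholders; an hbm.xml sequence generator would be roughly equivalent):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class MyEntity {
    // Hibernate calls the Oracle sequence itself instead of relying on the
    // trigger, so the generated id is known and set on the entity by save().
    @Id
    @SequenceGenerator(name = "my_seq_gen", sequenceName = "MY_TABLE_SEQ", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "my_seq_gen")
    private Long id;

    // other fields, getters and setters omitted
}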

Where will the record get inserted first?

I have a schema called "CUSTOMERS". In this schema there is a table called RECEIVABLES.
There is another schema called "ACCOUNTS". In this schema, there is a table called RECEIVABLES_AC.
RECEIVABLES_AC has a public synonym called RECEIVABLES.
The table structure of both the tables is exactly the same.
If your front-end uses the CUSTOMERS schema credentials to establish a connection, how can you ensure that the record will get inserted into RECEIVABLES_AC without changing the front-end code?
I think this is a trick question. Short of renaming the table RECEIVABLES in the CUSTOMERS schema, I don't see how this can be done.
The only way that I can think of (without changing the login or insert statement) is to use a database trigger that runs on login and changes the current schema to ACCOUNTS:
create or replace trigger logon_set_schema
AFTER LOGON ON DATABASE
BEGIN
    if sys_context('USERENV','SESSION_USER') = 'CUSTOMERS' then
        execute immediate 'alter session set current_schema=accounts';
    end if;
END;
/
However, this would likely break other aspects of the code, so changing the application to specify the schema name would be vastly preferable.
What isn't specified is whether the behavior is supposed to be instead-of or in-addition-to.
Use replication on ACCOUNTS.RECEIVABLES to propagate DML to CUSTOMER.RECEIVABLES_AC. Triggers, streams, what have you.
Use the ALTER SESSION SET CURRENT_SCHEMA statement to change the default namespace of the user's session.
The right way to respond is to fix the design, and to not have multiple receivables tables with public schemas floating about.
Two good ways to solve this problem are:
Option 1
Rename CUSTOMERS.RECEIVABLES.
Drop the public synonym.
Create a private synonym in the CUSTOMERS schema, called RECEIVABLES that points to ACCOUNTS.RECEIVABLES_AC.
Option 2
Change the front-end to refer to RECEIVABLES_AC instead of RECEIVABLES.
Create a private synonym in the CUSTOMERS schema, called RECEIVABLES_AC that points to ACCOUNTS.RECEIVABLES_AC.
I would prefer Option 2. Private synonyms are a great way of controlling which tables are used by a particular schema, without having to hard-code the schema name in the app.
