Keeping tables consistent during trigger execution? - oracle

I have a trigger that checks a couple of other tables before allowing a row to be inserted. However, between the time I check the other tables and the time I insert the row, the other tables may get updated.
How do I ensure the tables I'm checking remain in a consistent state until after the new row is inserted? I was thinking of taking out locks, but everything I've read boils down to: if you are not leaving locking to Oracle, you're almost certainly doing it wrong.

Oracle is already doing this for you: when you perform a select, it will look at all tables as of the time the transaction started (the time of the first DML). This won't stop the data from being changed under you, though; your transaction just won't see it being changed. If you want to stop that data from being changed, then you can use "SELECT FOR UPDATE" as Justin Cave suggests.
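A minimal sketch of that approach inside a row-level trigger, assuming made-up ORDERS and PRODUCTS tables (none of these names are from the question):
CREATE OR REPLACE TRIGGER orders_check_product
BEFORE INSERT ON orders
FOR EACH ROW
DECLARE
  v_status products.status%TYPE;
BEGIN
  -- Lock the row being validated so it cannot change until this
  -- transaction commits or rolls back.
  SELECT status
    INTO v_status
    FROM products
   WHERE product_id = :NEW.product_id
     FOR UPDATE;

  IF v_status <> 'ACTIVE' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Product is not active');
  END IF;
END;
/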
I would seriously question what you are doing, though: triggers, except in the most trivial cases, almost always lead to unexpected side effects.

Related

Operations on certain tables won't finish

We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
TRANSMISSIONS' primary key is ID, and there is also a unique constraint on NAME - therefore there are indexes on ID and NAME. We also have a trigger which creates values for column ID using a sequence, so that INSERT statements do not need to provide a value for ID.
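For reference, that trigger looks roughly like this (a sketch; the trigger and sequence names are assumptions, not taken from our schema):
CREATE OR REPLACE TRIGGER transmissions_id_trg
BEFORE INSERT ON transmissions
FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
  -- Populate ID from the sequence only when the caller did not supply one.
  SELECT transmissions_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/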
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate and due to an incorrect JPA configuration we produced high values for ID during a time. Note that we use the trigger only for "manual" INSERT statements and that Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs but we have the problems also with tables that never had such high IDs.
Anyway, we suspected that the indexes might be fragmented somehow and called ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which ran through but showed no effect.
We also wanted to call ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first needed to call ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, which never finished.
We have another instance of the database which does not behave in such a funny way. So we think it might be that in the course of running the application the database got somehow into an inconsistent state.
Does anyone have any suggestions as to what might have gone out of control or into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to information in v$lock and v$locked_object; a sketch of the kind of check we ran is shown below)
We tried all these statements in SQL Developer and also using SQLPlus (command-line tool).
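The check against those views was along these lines (a sketch; the schema name MYSCHEMA is a placeholder):
SELECT s.sid, s.serial#, s.username, o.object_name, lo.locked_mode
  FROM v$locked_object lo
  JOIN dba_objects o ON o.object_id = lo.object_id
  JOIN v$session s ON s.sid = lo.session_id
 WHERE o.owner = 'MYSCHEMA'
   AND o.object_name = 'TRANSMISSIONS';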

Sybase ASE remote row insert locking

I'm working on an application that accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database (via CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK column is defined as an identity and is always growing). The current code performs a select to check whether the row already exists, to avoid inserting duplicate data.
Since the remote table also has a PK defined, I understand that the PK verification will be done again before the row is committed.
I'm planning to remove the select check, since it is effectively repeated by the PK verification, but I'm concerned that when receiving a file with many duplicates, the table may suffer unnecessary contention when the data is committed.
It's not clear to me whether Sybase ASE holds the last row and writes the data before checking for the duplicate. Also, if the table is very big, I'm concerned about the time it will spend searching the whole index to find duplicates.
I've found some documentation for SQL Anywhere, but not for ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any kind of optimization to write it ahead of, or at the same time as, the PK checking), nor whether it will waste a full PK lookup if I am inserting a row whose PK is definitely greater than the last row committed.
Thanks
Alex
Unlike SQL Anywhere, there is no option in ASE to set wait_for_commit. The primary key constraint is checked during the insert, not at commit time. As I understand your post, the problem is a mass insert from a file that may contain duplicates; the approach is to load into a temp table, check for duplicates, remove the duplicates, and then insert the unique ones. Mass inserts are a lot faster, even though they still check for primary key violations, and once the data is de-duplicated there is no extra cost because nothing has to roll back. The insert statement is always all or nothing: even if one row is a duplicate, the entire insert statement will fail. Checking before the insert is the more error-free approach, as opposed to leaving the verification to the constraint, because the constraint violation will make the statement fail and that rollback is again going to be costly.
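A rough sketch of that staging-table approach in ASE T-SQL (the table and column names here are made up, not from the question):
-- 1. Bulk load the incoming file into a temporary table (e.g. with bcp).
-- 2. Throw away rows whose key already exists in the target table.
DELETE #staging
  FROM #staging s, target_table t
 WHERE s.pk_col = t.pk_col

-- 3. Insert only the remaining, unique rows in a single statement.
INSERT INTO target_table (pk_col, col1, col2)
SELECT pk_col, col1, col2
  FROM #staging
If the file can contain duplicates of itself (not just of existing rows), the staging table would also need its own de-duplication step before the final insert.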
Thanks Mike
The link does have a very quick explanation of the insert from the CIS perspective. It's something to keep an eye on, given that CIS may become a significant time consumer if it performs data and syntax checking that is then done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence, beyond the network traffic/time, over the locking/PK checking.
Raju
I do agree that the best way to avoid PK duplication is to check whether the row already exists with a select and to do the work in a batch, but I'm currently looking for a stop-gap solution, and that may be to perform the insert command in batches of about 50 rows and leave the duplicate key check to the PK.
Hopefully the PK check will then be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row...
I'll try to test this and comment back.
Alex

Oracle, 2 procedures avoid deadlock

I have two procedures that I want to run on the same table: one uses the birth date, and the other updates the name and last name taken from a third table.
The one that uses the birth date to update the age field runs over the whole table, and the one that updates the name and last name only updates the rows that appear in the third table, matched on a key.
So I launched both and got a deadlock! Is there a way to prioritize either of them? I read about NOWAIT and SKIP LOCKED for the update, but then how would I return to the rows that were skipped?
Hope you can help me on this!!
One possibility is to lock all rows you will update at once. Doing all updates in a single update statement will accomplish this. Or:
select whatever from T
where ...
for update;
Another solution is to create what I call a "Gatekeeper" table. Both procedures need to lock the Gatekeeper table in exclusive mode before updating the table in question. The second procedure will block until the first commits but won't deadlock. In 11g you can create a table with no space allocated.
A variation is to insert a row in the Gatekeeper. Then lock only that row with select for update. Then you can use the Gatekeeper in other situations.
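A minimal sketch of the Gatekeeper idea (the table name is just an example):
-- One-time setup.
CREATE TABLE gatekeeper (dummy CHAR(1));

-- At the start of each procedure, before touching the real table:
LOCK TABLE gatekeeper IN EXCLUSIVE MODE;
-- ... do the updates ...
COMMIT;  -- the commit releases the lock
Whichever variant you use, the second procedure simply waits on the Gatekeeper lock instead of deadlocking against the first.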
I would guess that you got deadlocked because the update of all the rows and the update of a small set of rows accessed the rows in different orders.
The former used a full scan and reached Row A first, then went on to other rows, eventually trying to lock Row B. However, the other query was driven from an index or a join and already had Row B locked, and was off to lock Row A when it found it was already locked.
So, the fix: firstly, having an age column that needs to be constantly modified is a really bad idea. Perhaps it was done to allow indexing of age, but with a correctly written query an index on date of birth will let you find the same records just as quickly. You've broken normalisation rules and ended up coding yourself a deadlocking application. Hopefully you are only updating the rows that need to be updated, not all of them regardless -- I mean, that would just be insane.
The best solution is to get rid of that design flaw.
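For example, instead of maintaining an AGE column, a predicate on the date-of-birth column can use an index directly (a sketch; the table and column names are assumptions):
-- Everyone aged 18 or over, using only DATE_OF_BIRTH.
SELECT person_id, first_name, last_name
  FROM people
 WHERE date_of_birth <= ADD_MONTHS(TRUNC(SYSDATE), -18 * 12);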
The not so good solution is to deconflict your queries by running them at different times or by using DBMS_Lock so that only one of them can run at any time.

do we need to truncate a large table before dropping?

It is said that we should always truncate a large table before dropping it, because that improves performance. Is it true?
IMO in general if you simply want to drop a table then DROP is appropriate. It will release space the same way as TRUNCATE would and it will have the advantage of being atomic (no query will have the opportunity to see the table "empty").
From 10g onwards, however, a dropped table won't be deleted immediately: if there is sufficient space it will be put in the recycle bin. If you truncate the table first, no data will remain in the recycle bin. This may be why you have been told to truncate first (?).
In any case, if you want to bypass the recycle bin you could issue DROP TABLE your_table PURGE and this statement will be atomic.
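To illustrate (BIG_TABLE is just a placeholder name):
-- Default in 10g+: the table goes to the recycle bin and can be restored.
DROP TABLE big_table;
FLASHBACK TABLE big_table TO BEFORE DROP;

-- Bypass the recycle bin entirely, in a single atomic statement.
DROP TABLE big_table PURGE;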
It entirely depends on whether you want to be able to roll back if something goes wrong.
Deleting data records each deletion in the database's transaction logs, so the change can be rolled back until you commit it.
Truncation removes all the data from the table without recording it in those logs, so there can be a significant performance improvement in doing this. Just be sure you know what you are doing, as there's no way back.
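A quick illustration of the difference (BIG_TABLE is a placeholder):
DELETE FROM big_table;     -- fully logged, can still be undone
ROLLBACK;                  -- the rows come back

TRUNCATE TABLE big_table;  -- DDL: commits implicitly, cannot be rolled back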
It may be a good idea in order to reset the high water mark.

Can I detect the version of a table's DDL in Oracle?

In Informix, I can do a select from the systables table, and can investigate its version column to see what numeric version a given table has. This column is incremented with every DDL statement that affects the given table. This means I have the ability to see whether a table's structure has changed since the last time I connected.
Is there a similar way to do this in Oracle?
Not really. The Oracle DBA/ALL/USER_OBJECTS view has a LAST_DDL_TIME column, but it is affected by operations other than structure changes.
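For what it's worth, the column can be queried like this (MY_TABLE is a placeholder):
SELECT object_name, object_type, last_ddl_time
  FROM user_objects
 WHERE object_name = 'MY_TABLE';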
You can do that (and more) with a DDL trigger that keeps track of changes to tables. There's an interesting article with example here.
If you really want to do so, you'd have to use Oracle's auditing functions to audit the changes. It could be as simple as:
AUDIT ALTER TABLE WHENEVER SUCCESSFUL on [schema I care about];
That would at least capture the successful changes, ignoring drops and creates. Unfortunately, unwinding the stack of the table's historical structure by mining the audit trail is left as an exercise for the reader in Oracle, or to licensing the Change Management Pack.
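Assuming the audit trail is written to the database (AUDIT_TRAIL=DB), the recorded changes could then be read back with something like:
SELECT username, owner, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE action_name = 'ALTER TABLE'
 ORDER BY timestamp;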
You could also roll your own auditing by writing system-event triggers which are invoked on DDL statements. You'd end up having to write your own SQL parser if you really wanted to see what was changing.
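A bare-bones sketch of such a trigger (the DDL_LOG table here is hypothetical; you would have to create it yourself):
CREATE OR REPLACE TRIGGER track_table_ddl
AFTER DDL ON SCHEMA
BEGIN
  -- Record which table was touched and by which kind of statement.
  IF ora_dict_obj_type = 'TABLE' THEN
    INSERT INTO ddl_log (event_time, event, object_owner, object_name)
    VALUES (SYSDATE, ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name);
  END IF;
END;
/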
