Ensure validity of multiple hierarchies in an Oracle table

I'm storing multiple hierarchies within an Oracle table (standard model of parent, child, root columns) and was interested in finding out what constraints I should consider to make sure the hierarchy remains correct and valid.
I'd like to ensure that for each row the child, parent and root columns all make sense (e.g. the same root cannot be defined for two different hierarchies; if a child exists, so does its parent; etc.).
I know from working with linked tables that were defined with no constraints (!!) that inconsistent/incomplete data always creeps in one way or another, no matter how carefully the application tries to avoid it.
Given that triggers should generally be avoided for all but the simplest cases, and that I can't see how to express this as a declarative constraint, is a stored procedure the only real way to go (presumably with the transaction isolation level set to SERIALIZABLE)?

You cannot define a foreign key that refers back to the same table within the CREATE TABLE statement, but you can do it with an ALTER TABLE statement:
create table XXX (
  key    number primary key,
  parent number
)
/
alter table XXX add constraint XXX_FK foreign key (parent) references XXX
/
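The same declarative approach could be stretched to cover the root column from the original question. Below is a sketch (table and column names are made up) that uses a composite key so the database itself checks that a child's root matches its parent's root and that a root row points at itself:
create table nodes (
  id     number primary key,
  parent number,
  root   number not null,
  constraint nodes_id_root_uq unique (id, root),
  -- a row with no parent is a root and must be its own root
  constraint nodes_root_chk   check (parent is not null or root = id)
)
/
-- a child's (parent, root) pair must match an existing (id, root) pair,
-- so every node inherits its parent's root
alter table nodes add constraint nodes_parent_fk
  foreign key (parent, root) references nodes (id, root)
/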

Related

Make an Oracle foreign key constraint referencing USER_SEQUENCES(SEQUENCE_NAME)?

I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
  sequence_name varchar2(128),
  constraint testtableconstr
    foreign key (sequence_name)
    references user_sequences (sequence_name)
    on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
Create a DDL trigger which does the insert automatically whenever a sequence is created.
Add a trigger to the table which does a query on the sequences view and raises an exception if not found (sketched below).
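A sketch of that last option, assuming the TESTTABLE from the question and that the sequences belong to the same schema (so USER_SEQUENCES is the right view); the trigger name is made up:
create or replace trigger testtable_seq_chk
before insert or update of sequence_name on testtable
for each row
declare
  l_cnt pls_integer;
begin
  -- reject the row if no sequence with that name exists in this schema
  select count(*)
    into l_cnt
    from user_sequences
   where sequence_name = :new.sequence_name;

  if l_cnt = 0 then
    raise_application_error(-20001,
      'No such sequence: ' || :new.sequence_name);
  end if;
end;
/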
I'm somewhat confused about what you are trying to achieve here - a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have been previously allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.

Oracle Index - full table scan/lock

Found this here:
In general, consider creating an index on a column in any of the following situations:
A referential integrity constraint exists on the indexed column or columns. The index is a means to avoid a full table lock that would otherwise be required if you update the parent table primary key, merge into the parent table, or delete from the parent table.
I don't understand why a full table lock would occur in such a situation. I would've thought that if I tried to delete or update the primary key in the parent table, a full table scan would be performed on the child table.
Where does the lock come from?
Have a look at this Tom Kyte blog entry. In it, he refers to the Oracle documentation, where this explanation is offered:
Prevents a full table lock on the child table. Instead, the database acquires a row lock on the index.
Removes the need for a full table scan of the child table. As an illustration, assume that a user removes the record for department 10 from the departments table. If employees.department_id is not indexed, then the database must scan employees to see if any employees exist in department 10.
In the first scenario, if the column is not indexed, the entire table must be locked because Oracle does not know which rows must be updated in the child table. With an index, Oracle can identify the rows in question and just lock them. Without the full table lock, it would be possible to modify the parent and have another session modify the child to something that violated the constraint.
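In concrete terms, the fix is simply to index the foreign key column. A minimal sketch using the EMPLOYEES/DEPARTMENTS example from the quote (the HR sample schema usually ships with such an index already; the index name here is assumed):
-- with this index Oracle can lock just the affected child rows instead of
-- taking a full table lock on EMPLOYEES when a DEPARTMENTS row is deleted
create index emp_department_ix on employees (department_id);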

How to disable data integrity checking in a CREATE statement (reference to a non-existing table)

Let's assume that I have an empty database and I want to create two tables with a relationship between them. For example, one of the attributes (call it b) in R is a foreign key. The definitions SHOULD (the database is empty; I have not executed any statements yet) look like this:
create table R (
  a type primary key,
  b type references S (b)
);
create table S (
  b type primary key
);
If I try to run a script with the statements as above, I get an error (on the line b type references S(b)), because S doesn't exist yet - that is normal and natural. Is it possible to disable checking whether the tables referenced by foreign keys exist?
I know I can change the order of the statements, or create the tables without constraints and add them later.
Why am I asking? Assume that we have many tables with many relationships between them. Ordering the tables manually would consume a lot of time when we want to prepare a kind of 'backup script'.
I know that it is possible to disable a constraint as below:
alter table table_name disable constraint constraint_name;
But that works, for example, for insert statements. Is there a corresponding solution for create statements?
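For reference, the 'create without constraints, add them later' workaround mentioned above would look roughly like this (a sketch; NUMBER is assumed in place of the placeholder type):
create table S (
  b number primary key
);
create table R (
  a number primary key,
  b number
);
-- add the foreign key once both tables exist, so the order of the
-- CREATE TABLE statements no longer matters
alter table R add constraint R_B_FK foreign key (b) references S (b);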

How to improve deletion times in Oracle for a self-referencing table

In our Oracle 11g database we have a table, that has a primary key I_Node (int) and also a column called I_Parent_Node (int) that references back to another record in the same table. The root node has I_Parent_Node = null. In this way we form a tree structure of nodes, leaves, branches, whatever you want to call them.
Frequently we need to delete an entire branch of nodes at once, meaning a node and all of its children. At times this is many, many records, say 50,000 or more. Since a cascade delete is not allowed on a self-referencing table, we are forced to delete one by one starting with the leaves and working our way back up the tree. We have experienced hours-long delete times.
We are considering doing a "marking for deletion" technique, where a separate program would clean out the nodes marked for deletion during off-peak hours, but I am interested in whether a database design change or some other Oracle construct could help out here. I am not trained in Oracle aside from what I've learned on the job, and the people who created the database did not have such large quantities in mind. I am open to database design changes since it is not yet a fixed design.
You may want to consider separating the hierarchy structure from the main table. So your main table would just have primary ids (let's call the column "ID"), and your hierarchy table would have "ID, ParentID, TreeID". ParentID is that ID's parent node, and TreeID is the highest parent in the tree (level 1).
So, a level 1 node would look like:
ID, ParentID, TreeID
1, [null], 1
A level 2 node would look like:
ID, ParentID, TreeID
2, 1, 1
A level 3 node would look like:
ID, ParentID, TreeID
3, 2, 1
And so on.
You would use Oracle hierarchical queries (CONNECT BY queries) to query or traverse the trees. This table will be very thin (not many columns: these 3 plus maybe some modified dates), so updating these relationships should be much faster and scale better than messing with the main table.
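As a sketch of how such a tree could be traversed (the table name NODE_TREE and the bind variable are assumptions, matching the layout above):
-- return a node and all of its descendants from the thin hierarchy table
select id, parent_id, tree_id, level
  from node_tree
 start with id = :branch_root_id
connect by prior id = parent_id;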
You should be able to do this with deferrable constraints and a hierarchical query.
If your foreign key constraint (on I_Parent_Node) is not already deferrable, drop it and recreate it with the keyword "DEFERRABLE".
Here's an example using the EMPLOYEES table from Oracle's examples (I modified the DEPARTMENTS table too so that this would execute, that's really not needed for an example though):
Drop & Recreate your foreign key if it's not currently deferrable:
alter table employees drop constraint emp_manager_fk;
alter table employees add constraint emp_manager_fk foreign key (manager_id) references employees(employee_id) deferrable;
In your transaction, defer your constraints, and delete using a hierarchical query:
set constraints all deferred;

delete from employees e
 where employee_id in (select employee_id
                         from employees
                        start with employee_id = 108
                      connect by prior employee_id = manager_id);
The "108" is the ID of my "parent" record.
I assume you've already done standard tuning - i.e. are the node and parent node ID columns suitably indexed?
(1) One approach to the problem is to use PL/SQL. Bulk collect the IDs to be deleted, using a hierarchical query that returns the leaf rows first, into an array; then do a bulk delete (FORALL) using the array.
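A sketch of that first approach, assuming the question's column names on a table called NODES and a hypothetical branch root of 42:
declare
  type t_ids is table of nodes.i_node%type;
  l_ids t_ids;
begin
  -- collect the whole branch, deepest rows first, so every child is
  -- deleted before its parent
  select i_node
    bulk collect into l_ids
    from nodes
   start with i_node = 42
  connect by prior i_node = i_parent_node
   order by level desc;

  forall i in 1 .. l_ids.count
    delete from nodes where i_node = l_ids(i);
end;
/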
(2) Another approach is a soft-delete - mark the rows as "deleted", but never actually delete them. You would need to modify your application (or use Oracle VPD to automatically omit the "deleted" rows from queries). This might work reasonably well if deleting a node is relatively rare; but if you're routinely deleting lots of nodes then this would clutter the table with a lot of old data.
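And a sketch of the soft-delete idea, again with assumed names (a DELETED_FLAG column added to the same hypothetical NODES table):
alter table nodes add (deleted_flag char(1) default 'N' not null);

-- "delete" a whole branch by flagging it; an off-peak job (or a VPD policy)
-- deals with the flagged rows later
update nodes
   set deleted_flag = 'Y'
 where i_node in (select i_node
                    from nodes
                   start with i_node = 42
                 connect by prior i_node = i_parent_node);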

How do I rename a table in Oracle so that all foreign keys, constraints, triggers and sequences are updated and any existing data is preserved?

I need to rename a table in Oracle but I want to be sure that any foreign keys, constraints, triggers and sequences that reference the table are updated to use the new name.
How can I be sure that I have not broken anything?
Note that I want to preserve any existing data that the table contains.
If you
ALTER TABLE old_table_name
RENAME TO new_table_name;
all the existing constraints (foreign key and other constraints) and triggers will reference the newly renamed object. Sequences have no relationship to tables so there will be no impact on the sequences (though if you mean that you are referencing the sequence in a trigger on the table, the trigger will continue to reference the same sequence after the rename). Any stored procedures that you have written that reference the old table name, however, will need to be updated to reference the new table name.
Now, while the constraints and triggers will continue to work correctly, they will retain their original names. If you have naming conventions for these objects that you want to keep in line with the table name, you'd need to do more. For example, if you want the row-level before-insert trigger on table FOO to be named TRG_BI_FOO and you rename the table to BAR, you'd need to alter the trigger explicitly to change its name:
ALTER TRIGGER trg_bi_foo
RENAME TO trg_bi_bar;
Similarly, you'd need to rename your constraints and indexes
ALTER TABLE bar
RENAME CONSTRAINT pk_foo TO pk_bar;
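and likewise for any index whose name follows the table's naming convention (the index names here are made up):
ALTER INDEX idx_foo_name RENAME TO idx_bar_name;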
It depends on what you mean by "any foreign keys, constraints, triggers and sequences that reference the table are updated to use the new name."
Any existing indexes, constraints, and triggers against the table being renamed will automatically reference the new name.
However, any naming conventions used for those objects won't automatically use the updated name. For example, if the primary key for TABLE_NAME is generally named TABLE_NAME_PK, renaming TABLE_NAME to NEW_TABLE_NAME won't automatically rename the primary key constraint to NEW_TABLE_NAME_PK.
What will need to be checked is code - packages, procedures, and functions - which referenced the old table name, as well as any triggers which referenced the old table name. Similarly, views against the old table name will break as well. The view ALL_DEPENDENCIES can help identify which of those objects need to be updated.
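For example, a query along these lines (a sketch, run after the rename with the new table name, since dependency tracking follows the object through the rename) lists the objects to review:
SELECT owner, name, type
  FROM all_dependencies
 WHERE referenced_type = 'TABLE'
   AND referenced_name = 'NEW_TABLE_NAME';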
ALTER TABLE oldName RENAME TO newName
will preserve the table's dependencies and data, but there can always be a piece of PL/SQL that references the old name, which is going to become invalid.
