Validation checks in PL/SQL - oracle

As per my project requirement, I need to store all validation check queries in one table, validate all records of another table, and update each record with its validation status.
For example, I have two tables called EMP and VALIDATIONS
The VALIDATIONS table has two columns, as below:
Validation_desc   Validation_sql
---------------   --------------------------
EID_IS_NULL       related SQL should be here
SAL_HIGH          related SQL should be here
The EMP table has normal columns like eid, ename, sal, dept, is_valid, val_desc.
I need to write PL/SQL code that fetches all validation SQL statements from the VALIDATIONS table and validates each record of the EMP table. If a record passes all validations available in the VALIDATIONS table, its IS_VALID column should be updated to 1 and its val_desc set to NULL. If a record fails, say, 2 checks, its IS_VALID column should be updated to 0 and its val_desc set to the corresponding Validation_desc values, comma separated. All validations should be checked for every record of the EMP table.
I have tried the code below to fetch the details from both tables, but I am not able to write the logic for the validations.
CREATE PROCEDURE P_VALIDATION AS
  TYPE REC_TYPE IS RECORD(
    Validation_desc VARCHAR2(4000),
    Validation_sql  VARCHAR2(4000));
  TYPE VAL_CHECK_TYPE IS TABLE OF REC_TYPE;
  LV_VAL_CHECK VAL_CHECK_TYPE;
  CURSOR CUR_FEED_DATA IS SELECT * FROM EMP;
  LV_FEED_DATA EMP%ROWTYPE;
BEGIN
  SELECT Validation_desc, Validation_sql
  BULK COLLECT INTO LV_VAL_CHECK
  FROM VALIDATIONS;
  OPEN CUR_FEED_DATA;
  LOOP
    FETCH CUR_FEED_DATA INTO LV_FEED_DATA;
    EXIT WHEN CUR_FEED_DATA%NOTFOUND;
    FOR I IN LV_VAL_CHECK.FIRST .. LV_VAL_CHECK.LAST LOOP
      ---- SOME VALIDATIONS LOGIC HERE ----
    END LOOP;
  END LOOP;
  CLOSE CUR_FEED_DATA;
END;

There is no single type of validation. Validations can be "is null", "is not null", "is true/false", "returns rows/no rows", etc. There are a couple of ways to tackle this:
- write your own assertion package with a procedure for each type of validation. In your table you'd store the type of validation and the expression to be evaluated. This is quite a bit of work.
- leverage an open-source testing framework like utPLSQL and modify it a bit to suit your needs. All kinds of validations have already been implemented there. If you decide to go this route, note that there are major differences between version 2 and version 3.
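If you go the assertion-package route, the core is usually one dynamic check executed per stored rule. A minimal sketch, assuming each Validation_sql is written so that it returns the rows that fail the check (the procedure name here is made up):

```sql
-- Sketch only: run each stored check as dynamic SQL and report failures.
-- Assumes VALIDATIONS.Validation_sql returns a row for every EMP record
-- that FAILS the check, e.g.  SELECT eid FROM emp WHERE eid IS NULL
CREATE OR REPLACE PROCEDURE p_run_checks AS
  l_fail_count PLS_INTEGER;
BEGIN
  FOR v IN (SELECT validation_desc, validation_sql FROM validations) LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM (' || v.validation_sql || ')'
      INTO l_fail_count;
    DBMS_OUTPUT.PUT_LINE(v.validation_desc || ': ' ||
                         l_fail_count || ' failing row(s)');
  END LOOP;
END p_run_checks;
/
```

From here you would replace the DBMS_OUTPUT call with whatever recording logic you need, for example inserting into a violations table.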

First off, pay attention to Koen Lostrie's opening sentence: "There is no single type of validation." This basically says no single process will solve the problem. But the database's built-in validations should be your first line of attack. Both of the example validations simply cease to be necessary with simple predefined constraints:
Use a constraint to define ... — a rule that
restricts the values in a database. Oracle Database lets you create
six types of constraints ...
For the table you outlined, constraints can handle the validations you listed, plus a couple of extras:
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Validation  | Column | Constraint Type                  | Description                                         |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Eid_is_Null | eid    | Primary Key                      | Guarantees eid is not null and is unique in table   |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Sal_High    | salary | Check (salary <= 50000)          | Guarantees salary column is not greater than 50,000 |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Dept_OK     | dept   | not null                         | Ensures column is present                           |
|             |        +----------------------------------+-----------------------------------------------------+
|             |        | Foreign key to Departments table | Guarantees value exists in reference table          |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Name_Ok     | ename  | not null                         | Ensures column is present                           |
+-------------+--------+----------------------------------+-----------------------------------------------------+
For the constraints listed above, and any others you add, you do not need a query to validate: the database manager simply will not allow an invalid row to exist. Of course, your code now needs to handle the resulting exceptions.
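For example, with the primary key and check constraints above in place, a duplicate eid raises ORA-00001 (exposed in PL/SQL as the predefined DUP_VAL_ON_INDEX exception) and an out-of-range salary raises ORA-02290. A sketch of trapping both, with column names taken from the question:

```sql
BEGIN
  INSERT INTO emp (eid, ename, sal, dept)
  VALUES (100, 'Smith', 40000, 10);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN            -- primary key violated (ORA-00001)
    DBMS_OUTPUT.PUT_LINE('eid 100 already exists');
  WHEN OTHERS THEN
    IF SQLCODE = -2290 THEN             -- check constraint violated (ORA-02290)
      DBMS_OUTPUT.PUT_LINE('salary above the allowed maximum');
    ELSE
      RAISE;
    END IF;
END;
/
```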
Unfortunately, constraints cannot handle every validation, so you will still need validation routines and a way to record their results. However, you should not create a comma-separated list (not only does it violate first normal form, such lists are always more trouble than they are worth: how do you update the list when an error is resolved?). Instead, create a new table, emp_violations:
create table emp_violations (
    emp_vio_id integer generated always as identity
  , emp_id integer -- or the type of emp_id
  , vio_id integer -- or the type of vio_id (the pk of the violations table)
  , date_created date
  , date_resolved date
  , constraint emp_violations_pk
      primary key (emp_vio_id)
  , constraint emp_violations_2_emp_fk
      foreign key (emp_id)
      references emp(emp_id)
  , constraint emp_violations_2_violations_fk
      foreign key (vio_id)
      references violations(vio_id)
);
With this it is easy to see which non-constraint violations have existed, or currently exist, and when they were resolved. Also remove the columns Validation_desc (no longer needed) and is_valid (derivable from unresolved emp_violations rows, requiring no additional column maintenance).
If you absolutely must have is_valid and a comma-separated list of violations, then create a view using the LISTAGG function.
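A sketch of such a view, assuming the emp_violations table above plus a violations table carrying vio_id and validation_desc columns (names assumed, not from the original post):

```sql
CREATE OR REPLACE VIEW emp_validation_status AS
SELECT e.eid,
       -- valid when no unresolved violations exist for the employee
       CASE WHEN COUNT(ev.emp_vio_id) = 0 THEN 1 ELSE 0 END AS is_valid,
       -- comma-separated list of open violation descriptions
       LISTAGG(v.validation_desc, ',')
         WITHIN GROUP (ORDER BY v.validation_desc) AS val_desc
FROM   emp e
LEFT JOIN emp_violations ev
       ON ev.emp_id = e.eid
      AND ev.date_resolved IS NULL
LEFT JOIN violations v
       ON v.vio_id = ev.vio_id
GROUP BY e.eid;
```

This keeps the base tables normalized while still presenting the flattened shape the original requirement asked for.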


View Performance

I have a requirement to perform a calculation on a column of a table with a large data set (300 GB) and return that value.
Basically, I need to create a view on that table. The table has 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply a filter at runtime when executing the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like:
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle internally execute the above statement? Will it execute the view query first and then apply the date filter?
If so, will there not be a performance issue? And how can this be resolved?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user; the function returns a CREATE VIEW statement, and if you run that statement the view will be created. Just run:
create or replace function fnc_x(where_condition in varchar2)
  return varchar2
as
begin
  return ' CREATE OR REPLACE VIEW sup_orders AS
    SELECT suppliers.supplier_id, orders.quantity, orders.price
    FROM suppliers
    INNER JOIN orders
    ON suppliers.supplier_id = orders.supplier_id
    '||where_condition||' ';
end fnc_x;
This function should then be run. The input to the function is a string like this (note the doubled single quotes needed around the character literal):
'WHERE suppliers.supplier_name = ''Microsoft'''
then you should run a block like this to execute the function's result:
cl scr
set SERVEROUTPUT ON
declare
  szSql varchar2(3000);
  crte_vw varchar2(3000);
begin
  szSql := 'select fnc_x(''WHERE suppliers.supplier_name = ''''Microsoft'''''') from dual';
  dbms_output.put_line(szSql);
  execute immediate szSql into crte_vw; -- generate the 'create view' command that depends on the user's where_condition
  dbms_output.put_line(crte_vw);
  execute immediate crte_vw; -- create the view
end;
In this manner, you just need to receive the where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we may think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace
'1-Jan-19' with date '2019-01-01'. When we use ANSI date literals, there is no ambiguity and Oracle is more likely to use partition pruning.

Inserting Row Number based on existing value in the table

I have a requirement that I need to insert row number in a table based on value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John | Doe | 13 | 123 |
+----------+----------+------------+---------+
Now, I need to insert more records, with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into table with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either, because due to the high volume of data the TEMP space fills up quickly. I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes from different processes and sessions trying to insert at the same time; but neither would the cursor loop approach, unless it is catching unique constraint errors and re-attempting with a new value, perhaps.
It would be better to use a sequence or an auto-increment (identity) column, but you said you can't change the table structure, and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply, or even think about, the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in this construct, and list the staging-table columns explicitly); and any existing inserts that do supply row_nbr don't need to be modified, since the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.

Access "Page Items to Submit" values inside a trigger in oracle apex

I want to access an extra page item value from a trigger; the item isn't a column in the related data table. For example, I have a table like below:
Employee
----------------------
empId | fName | lName
----------------------
1 | John | Doe
----------------------
2 | Jane | Doe
Apex form items will be like,
P500_EMPID, P500_FNAME, P500_LNAME
I have an extra page item called P500_SOMEIDS, which is a multi-select list. I want to access the selected values inside the After Insert trigger of the Employee table. I tried adding this item to "Page Items to Submit", but I do not know how to access it inside that trigger. Is it possible? And how?
In the page process that handles your table update (that will be a process of type "Form - Automatic Row Processing (DML)"), under "Settings" there is an attribute "Return Primary Key(s) after Insert". If that is set to "On", then the insert statement will return the value of the inserted row into the page item that is defined as your primary key.
Example:
Create a form on emp
Make P1_EMPNO the primary key page item
Suppose you create a new row, and that empno value is 1000. Then after the automatic row processing page process is run, the value of P1_EMPNO is set to 1000.
If you want to insert rows into another table referencing the newly created empno then you can create a page process (that executes after the row processing) of type pl/sql code like this:
BEGIN
INSERT INTO some_other_table (empno) VALUES (:P1_EMPNO);
END;
Using triggers for business functionality should be avoided whenever possible.
Instead of a database trigger, use a stored procedure which will do the same job. Pass any number of parameters you want (including the P500_SOMEIDS item).
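For example, a multi-select list submits its values as one colon-delimited string, which the procedure can split with APEX_STRING.SPLIT. A sketch (the procedure name and the emp_some_ids detail table are made up; it assumes empId is an identity column, and :P500_SOMEIDS would be passed as the third argument from a page process):

```sql
CREATE OR REPLACE PROCEDURE add_emp_with_ids (
  p_fname    IN employee.fname%TYPE,
  p_lname    IN employee.lname%TYPE,
  p_some_ids IN VARCHAR2              -- e.g. the value of :P500_SOMEIDS
) AS
  l_emp_id employee.empid%TYPE;
BEGIN
  INSERT INTO employee (fname, lname)
  VALUES (p_fname, p_lname)
  RETURNING empid INTO l_emp_id;

  -- Multi-select page items arrive as 'id1:id2:id3'
  FOR i IN (SELECT column_value AS some_id
            FROM   TABLE(apex_string.split(p_some_ids, ':'))) LOOP
    INSERT INTO emp_some_ids (empid, some_id)   -- hypothetical detail table
    VALUES (l_emp_id, i.some_id);
  END LOOP;
END add_emp_with_ids;
/
```

This keeps all the logic in one explicit call instead of hiding it in a trigger that cannot see page state anyway.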

Automatically inserting data into a table using a procedure

I would like to ask a rather easy question, but I cannot get my head around it as I am a beginner in SQL.
My task is: Enter initial data into BankStats2 by inserting rows into BankStats2 that
contain the branch names together with how many loans are in the Loan
table for that branch name.
desc BankStats2
Name Null? Type
----------------------------------------- -------- ----------------------------
BRANCHNAME NOT NULL VARCHAR2(20)
NUMBEROFLOANS NUMBER(38)
desc Loan
Name Null? Type
----------------------------------------- -------- ----------------------------
CUSTOMERNAME CHAR(20)
BRANCHNAME CHAR(20)
AMOUNT NUMBER(38)
LOANNUMBER NOT NULL NUMBER(38)
select branchName,count(customerName) from Loan group by branchName;
BRANCHNAME COUNT(CUSTOMERNAME)
-------------------- -------------------
Yorkshire 3
RoyalBank 1
Midlands 3
Basically, I would like to insert this information in the BankStats2 table and the way I thought of doing it is by creating a procedure which I will show below.
CREATE OR REPLACE PROCEDURE PopulateBankStats AS
  CURSOR someLoanRows IS
    SELECT branchName, COUNT(customerName) FROM loan GROUP BY branchName;
  aBranchNameRow loan.branchName%TYPE;
  numberOfLoans  INT;
BEGIN
  OPEN someLoanRows;
  LOOP
    FETCH someLoanRows INTO aBranchNameRow, numberOfLoans;
    INSERT INTO BankStats2 VALUES (aBranchNameRow, numberOfLoans);
    EXIT WHEN someLoanRows%NOTFOUND;
  END LOOP;
  CLOSE someLoanRows;
END;
/
But executing it gives me the following error:
ERROR at line 1:
ORA-00001: unique constraint (N0757934.SYS_C0034405) violated
ORA-06512: at "N0757934.POPULATEBANKSTATS", line 10
ORA-06512: at line 1
Any help would be greatly appreciated. Thank you for your time!
This insert fails: INSERT INTO BankStats2 VALUES (aBranchNameRow,numberOfLoans); due to the error: ORA-00001: unique constraint (N0757934.SYS_C0034405) violated
This means that there is a unique constraint created on some of the columns of the table BankStats2.
To find which column has the unique constraint, run this query:
select * from USER_IND_COLUMNS where index_name = 'SYS_C0034405';
Your procedure is trying to insert a value for this column that already exists in the table.
Have a look at the INSERT statement.
What your procedure is doing is exactly this insert statement:
INSERT INTO BankStats2 (BRANCHNAME,NUMBEROFLOANS)
SELECT branchName,COUNT(customerName) FROM loan GROUP BY branchName;
It is always preferable to use a plain SQL statement (if possible) instead of PL/SQL cursor-loop logic; search for Tom Kyte's "row by row - slow by slow" for an explanation.
Even if you want to use a procedure at all costs, use this INSERT in the procedure.
Your exception means that you are trying to insert a value of the column BRANCHNAME that already exists in the table BankStats2.
This could be an accident or a systematic problem.
If it is an accident, simply clean the data, i.e. DELETE the row(s) with the corresponding keys from the BankStats2 table.
This query returns the values existing in both tables:
select BRANCHNAME from BankStats2
intersect
select branchName FROM loan;
If you want to systematically avoid inserting duplicated rows, add this logic to your INSERT statement:
INSERT INTO BankStats2 (BRANCHNAME,NUMBEROFLOANS)
SELECT branchName,COUNT(customerName)
FROM loan
WHERE branchName IS NOT NULL
and branchName NOT IN (select BRANCHNAME from BankStats2)
GROUP BY branchName;
Note that the SELECT excludes the rows whose values already exist in the target table, using NOT IN (subquery).
Note also that I'm anticipating your next possible problem: the column BRANCHNAME is not nullable in BankStats2, but it is nullable (i.e. may contain NULL) in loan, so inserting a row with a NULL branch name into BankStats2 would fail. That is why I exclude those rows with the branchName IS NOT NULL predicate.
If you want to process the existing keys with UPDATE logic instead, check the MERGE statement.
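A sketch of that MERGE, reusing the same aggregate query as the source so that existing branch names are updated and new ones inserted:

```sql
MERGE INTO BankStats2 b
USING (SELECT branchName, COUNT(customerName) AS numberOfLoans
       FROM   loan
       WHERE  branchName IS NOT NULL
       GROUP  BY branchName) l
ON (b.BRANCHNAME = l.branchName)
WHEN MATCHED THEN
  UPDATE SET b.NUMBEROFLOANS = l.numberOfLoans
WHEN NOT MATCHED THEN
  INSERT (BRANCHNAME, NUMBEROFLOANS)
  VALUES (l.branchName, l.numberOfLoans);
```

Re-running this statement is safe: it never produces the ORA-00001 duplicate-key error, because matching rows are updated rather than re-inserted.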

2 triggers launching at the same time

I have two tables
user_salary
-------------------------
| user_id | salary_p |
-------------------------
| 1 | 100 |
| 2 | 200 |
-------------------------
user_p_salary
------------------------
| user_id | salary_c |
-------------------------
| 1 | 100 |
| 2 | 200 |
-------------------------
user_salary is used via the UI, and it has the following trigger:
create or replace trigger t$user_salary_aiu
after insert or update of salary_p
on user_salary
for each row
begin
  update user_p_salary t
  set t.salary_c = :new.salary_p
  where t.user_id = :new.user_id;
end t$user_salary_aiu;
user_p_salary gets data via an integration, and it has the following trigger:
create or replace trigger t$user_p_salary_aiu
after insert or update of salary_c
on user_p_salary
for each row
begin
  update user_salary t
  set t.salary_p = :new.salary_c
  where t.user_id = :new.user_id;
end t$user_p_salary_aiu;
Now the problem is that if one of the tables gets data, its trigger executes and updates the data in the other table. However, the trigger on the other table then executes as well, which creates a cycle of triggers.
The only way I can think of is execute immediate 'alter trigger ... disable', but this doesn't seem to work inside triggers at all. Any ideas?
Thanks in advance :-)
The ORA-04091 error is exactly what should happen here. In simple terms, you can't do what you're trying to do. Think about it - you update table #1, then the trigger on table #1 updates table #2, whose trigger updates table #1, whose trigger updates table #2, over and over and over. It's a trigger loop, and Oracle doesn't allow that to happen. The rule is that a row trigger cannot access a table which has already been changed (or "mutated") in the same transaction. There are techniques (notably, compound triggers) which let you "work around" this, but the best approach would be to re-work the design to eliminate this issue. Sorry to be the bearer of bad news. Best of luck.
Why not only update the salary if it is not equal to the new value e.g.
create or replace trigger t$user_salary_aiu
after insert or update of salary_p
on user_salary
for each row
begin
update user_p_salary t
set t.salary_c = :new.salary_p
where t.user_id = :new.user_id
and ( t.salary_c <> :new.salary_p
or (t.salary_c is null and :new.salary_p is not null)
or (t.salary_c is not null and :new.salary_p is null) );
end t$user_salary_aiu;
create or replace trigger t$user_p_salary_aiu
after insert or update of salary_c
on user_p_salary
for each row
begin
update user_salary t
set t.salary_p = :new.salary_c
where t.user_id = :new.user_id
and ( t.salary_p <> :new.salary_c
or (t.salary_p is null and :new.salary_c is not null)
or (t.salary_p is not null and :new.salary_c is null) );
end t$user_p_salary_aiu;
Note: Despite the wording of the documentation, the dml_event_clause update of column appears to mean that the trigger will fire if the column is included in the triggering UPDATE statement, i.e. if the column is updated, even if it is updated to the same value it already had.
How about adding a column called BY_TRIGGER to each table?
For an update or insert outside your triggers, you simply do not specify this column. However, when updating or inserting from within a trigger, you pass a value of 1.
Then, in each trigger, you check whether :new.BY_TRIGGER is 1, and if it is, you skip the insert/update of the other table.
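A sketch of that flag check for the first trigger (the second is symmetrical); the BY_TRIGGER column is the new column this suggestion adds to both tables:

```sql
create or replace trigger t$user_salary_aiu
after insert or update of salary_p
on user_salary
for each row
begin
  -- Skip propagation when this row was itself written by the other trigger
  if nvl(:new.by_trigger, 0) = 1 then
    return;
  end if;
  update user_p_salary t
  set    t.salary_c   = :new.salary_p,
         t.by_trigger = 1              -- tell the other trigger to stand down
  where  t.user_id    = :new.user_id;
end t$user_salary_aiu;
/
```

Because the second trigger sees by_trigger = 1 and returns before touching user_salary, the cycle (and the mutating-table error) never starts.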