Friends,
This Ask Tom thread, which I found via another SO question, mentions Table and Transactional APIs, and I'm trying to understand the difference between them.
A Table API (TAPI) is where there is no direct access to the underlying tables and there are "getters" and "setters" to obtain and modify the information.
For example to select an address I would:
the_address := get_address(address_id);
Instead of:
select the_address
from some_table
where identifier = address_id
And then to change the address I would invoke another TAPI which takes care of the change:
...
change_address(address_id, new_address);
...
A Transactional API (XAPI) is again where there is no direct access to modify the information in the table but I can select from it? (this is where my understanding is kind of hazy)
To select an address I would:
select the_address
from some_table
where identifier = address_id
and then to change it I would call
...
change_address(address_id, new_address);
...
So the only difference I can see between a TAPI and an XAPI is the method by which a record is retrieved from the database, i.e. a SELECT versus a PL/SQL call?
Is that it? Or have I missed the point entirely?
Let's start with the Table API. This is the practice of mediating access to tables through a PL/SQL API. So, we have a package per table, which should be generated from the data dictionary. The package presents a standard set of procedures for issuing DML against the table and some functions for retrieving data.
By comparison a Transactional API represents a Unit Of Work. It doesn't expose any information about the underlying database objects at all. Transactional APIs offer better encapsulation, and a cleaner interface.
The contrast is like this. Consider these business rules for creating a new Department:
The new Department must have a Name and Location
The new Department must have a manager, who must be an existing Employee
Other existing Employees may be transferred to the new Department
New employees may be assigned to the new Department
The new Department must have at least two Employees assigned (including the manager)
Using Table APIs the transaction might look something like this:
DECLARE
dno pls_integer;
emp_count pls_integer;
BEGIN
dept_utils.insert_one_rec(:new_name, :new_loc, dno);
emp_utils.update_one_rec(:new_mgr_no, p_job=>'MGR', p_deptno=>dno);
emp_utils.update_multi_recs(:transfer_emp_array, p_deptno=>dno);
FOR idx IN :new_hires_array.FIRST..:new_hires_array.LAST LOOP
:new_hires_array(idx).deptno := dno;
END LOOP;
emp_utils.insert_multi_recs(:new_hires_array);
emp_count := emp_utils.get_count(p_deptno=>dno);
IF emp_count < 2 THEN
raise_application_error(-20000, 'Not enough employees');
END IF;
END;
/
Whereas with a Transactional API it is much simpler:
DECLARE
dno subtype_pkg.deptno;
BEGIN
dept_txns.create_new_dept(:new_name
, :new_loc
, :new_mgr_no
, :transfer_emps_array
, :new_hires_array
, dno);
END;
/
So why the difference in retrieving data? Because the Transactional API approach discourages generic get() functions in order to avoid the mindless use of inefficient SELECT statements.
For example, if you just want the salary and commission for an Employee, querying this ...
select sal, comm
into l_sal, l_comm
from emp
where empno = p_eno;
... is better than executing this ...
l_emprec := emp_utils.get_whole_row(p_eno);
...especially if the Employee record has LOB columns.
It is also more efficient than:
l_sal := emp_utils.get_sal(p_eno);
l_comm := emp_utils.get_comm(p_eno);
... if each of those getters executes a separate SELECT statement. Which is not unknown: it's a bad OO practice that leads to horrible database performance.
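For illustration, here is the shape of getter being criticised, sketched as it might appear in an emp_utils package body (the body is hypothetical; only the calls were shown above):

function get_sal (p_eno in emp.empno%type) return emp.sal%type
is
    l_sal emp.sal%type;
begin
    -- each single-column getter runs its own query:
    -- fetching two attributes costs two round-trips to the table
    select sal
    into   l_sal
    from   emp
    where  empno = p_eno;
    return l_sal;
end get_sal;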
The proponents of Table APIs argue for them on the basis that they shield the developer from needing to think about SQL. The people who deprecate them dislike Table APIs for the very same reason. Even the best Table APIs tend to encourage RBAR (row-by-agonising-row) processing. If we write our own SQL each time, we're more likely to choose a set-based approach.
Using Transactional APIs doesn't necessarily rule out the use of get_resultset() functions. There is still a lot of value in a querying API. But it's more likely to be built out of views and functions implementing joins than SELECTs on individual tables.
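For example, a querying API along those lines might look like this. This is just a sketch; the view and function names are invented for illustration:

create or replace view emp_dept_v as
    select e.empno, e.ename, e.sal, d.deptno, d.dname, d.loc
    from   emp e
           join dept d on d.deptno = e.deptno;

create or replace function get_emps_in_location (p_loc in dept.loc%type)
    return sys_refcursor
is
    rc sys_refcursor;
begin
    -- the join is written once, in the view, rather than being
    -- assembled row-by-row from single-table getters
    open rc for select * from emp_dept_v where loc = p_loc;
    return rc;
end get_emps_in_location;
/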
Incidentally, I think building Transactional APIs on top of Table APIs is not a good idea: we still have siloed SQL statements instead of carefully written joins.
As an illustration, here are two different implementations of a transactional API to update the salary of every Employee in a Region (Region being a large scale section of the organisation; Departments are assigned to Regions).
The first version has no pure SQL, just Table API calls. I don't think this is a straw man: it uses the sort of functionality I have seen in Table API packages (although some use dynamic SQL rather than named SET_XXX() procedures).
create or replace procedure adjust_sal_by_region
(p_region in dept.region%type
, p_sal_adjustment in number )
as
emps_rc sys_refcursor;
emp_rec emp%rowtype;
depts_rc sys_refcursor;
dept_rec dept%rowtype;
begin
depts_rc := dept_utils.get_depts_by_region(p_region);
<< depts >>
loop
fetch depts_rc into dept_rec;
exit when depts_rc%notfound;
emps_rc := emp_utils.get_emps_by_dept(dept_rec.deptno);
<< emps >>
loop
fetch emps_rc into emp_rec;
exit when emps_rc%notfound;
emp_rec.sal := emp_rec.sal * p_sal_adjustment;
emp_utils.set_sal(emp_rec.empno, emp_rec.sal);
end loop emps;
end loop depts;
end adjust_sal_by_region;
/
The equivalent implementation in SQL:
create or replace procedure adjust_sal_by_region
(p_region in dept.region%type
, p_sal_adjustment in number )
as
begin
update emp e
set e.sal = e.sal * p_sal_adjustment
where e.deptno in ( select d.deptno
from dept d
where d.region = p_region );
end adjust_sal_by_region;
/
This is much nicer than the nested cursor loops and single row update of the previous version. This is because in SQL it is a cinch to write the join we need to select Employees by Region. It is a lot harder using a Table API, because Region is not a key of Employees.
To be fair, if we have a Table API which supports dynamic SQL, things are better but still not ideal:
create or replace procedure adjust_sal_by_region
(p_region in dept.region%type
, p_sal_adjustment in number )
as
emps_rc sys_refcursor;
emp_rec emp%rowtype;
begin
emps_rc := emp_utils.get_all_emps(
p_where_clause=>'deptno in ( select d.deptno
from dept d where d.region = '||p_region||' )' );
<< emps >>
loop
fetch emps_rc into emp_rec;
exit when emps_rc%notfound;
emp_rec.sal := emp_rec.sal * p_sal_adjustment;
emp_utils.set_sal(emp_rec.empno, emp_rec.sal);
end loop emps;
end adjust_sal_by_region;
/
Last word
Having said all that, there are scenarios where Table APIs can be useful, situations when we only want to interact with single tables in fairly standard ways. An obvious case might be producing or consuming data feeds from other systems e.g. ETL.
If you want to investigate the use of Table APIs, the best place to start is Steven Feuerstein's Quest CodeGen Utility (formerly QNXO). This is about as good as TAPI generators get, and it's free.
A table API (TAPI) is a simple API that provides the basic CRUD operations for a table. For example, if we have a table MYTABLE (id INTEGER PRIMARY KEY, text VARCHAR2(30)), then the TAPI would be something like:
package mytable_tapi is
procedure create_rec (p_id integer, p_text varchar2);
procedure update_rec (p_id integer, p_text varchar2);
procedure delete_rec (p_id integer);
function get_rec (p_id integer) return mytable%rowtype;
end;
When you use TAPIs, every table has a TAPI, and every insert, update and delete goes through the TAPI.
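The body behind such a spec is mechanical, which is why TAPIs are usually generated rather than hand-written. A sketch of two of the four routines, assuming the MYTABLE definition above (a real body must also implement update_rec and delete_rec):

package body mytable_tapi is
    procedure create_rec (p_id integer, p_text varchar2) is
    begin
        insert into mytable (id, text) values (p_id, p_text);
    end create_rec;

    function get_rec (p_id integer) return mytable%rowtype is
        l_rec mytable%rowtype;
    begin
        select * into l_rec from mytable where id = p_id;
        return l_rec;
    end get_rec;
end mytable_tapi;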
A transaction API (XAPI) is an API that works at the transaction level rather than at the individual CRUD level (though in some cases this will be the same thing). For example, an XAPI to handle banking transactions might look something like this:
package banking_xapi is
procedure make_transfer (p_from_account integer, p_to_account integer,
p_amount number);
... -- other XAPI procs
end;
The make_transfer procedure doesn't necessarily perform a single insert, update or delete. It may do something like this:
procedure make_transfer (p_from_account integer, p_to_account integer,
p_amount number)
is
begin
insert into transfer_details (from_account, to_account, amount)
values (p_from_account, p_to_account, p_amount);
update accounts set balance = balance-p_amount
where account_no = p_from_account;
update accounts set balance = balance+p_amount
where account_no = p_to_account;
end;
i.e. it performs an entire transaction, which may consist of 1 or several DML statements.
A TAPI proponent would say that this is coded wrongly, and that it should contain no direct DML but instead call TAPI code like:
procedure make_transfer (p_from_account integer, p_to_account integer,
p_amount number)
is
begin
transactions_tapi.insert_rec (p_from_account, p_to_account, p_amount);
accounts_tapi.update_rec (p_from_account, -p_amount);
accounts_tapi.update_rec (p_to_account, p_amount);
end;
Others (like Tom Kyte and myself) would see this as overkill, adding no real value.
So you can have XAPIs alone (Tom Kyte's way), or XAPIs that call TAPIs (Steve Feuerstein's way). But some systems have TAPIs alone, which is really poor - i.e. they leave it to writers of the user interface to string together the necessary TAPI calls to make up a transaction. See my blog for the implications of that approach.
I am trying to prevent inserts of records into a table for scheduling. If the start date of the class is between the start and end date of a previous record, and that record has the same location as the new record, then the insert should not be allowed.
I wrote the following trigger, which compiles, but of course mutates, and therefore has issues. I looked into compound triggers to handle this, but either it can't be done, or my understanding is bad, because I couldn't get that to work either. I would have assumed for a compound trigger that I'd want to do these things on before statement, but I only got errors.
I also considered after insert/update, but doesn't that apply after it's already inserted? It feels like that wouldn't be right...plus, same issue with mutation I believe.
The trigger I wrote is:
CREATE OR REPLACE TRIGGER PREVENT_INSERTS
before insert or update on tbl_classes
for each row
DECLARE
v_count number;
v_start TBL_CLASS_SCHED.start_date%type;
v_end TBL_CLASS_SCHED.end_date%type;
v_half TBL_CLASS_SCHED.day_is_half%type;
BEGIN
select start_date, end_date, day_is_half
into v_start, v_end, v_half
from tbl_classes
where class_id = :NEW.CLASS_ID
and location_id = :NEW.location_id;
select count(*)
into v_count
from TBL_CLASS_SCHED
where :NEW.START_DATE >= (select start_date
from TBL_CLASS_SCHED
where class_id = :NEW.CLASS_ID
and location_id = :NEW.location_id)
and :NEW.START_DATE <= (select end_date
from TBL_CLASS_SCHED
where class_id = :NEW.CLASS_ID
and location_id = :NEW.location_id);
if (v_count = 2) THEN
RAISE_APPLICATION_ERROR(-20001,'You cannot schedule more than 2 classes that are a half day at the same location');
end if;
if (v_count = 1 and :NEW.day_is_half = 1) THEN
if (v_half != 1) THEN
RAISE_APPLICATION_ERROR(-20001,'You cannot schedule a class during another class''s time period of the same type at the same location');
end if;
end if;
EXCEPTION
WHEN NO_DATA_FOUND THEN
null;
END PREVENT_INSERTS;
Perhaps it can't be done with a trigger, and I need to do it multiple ways? For now I've done it using the same logic before doing an insert or update directly, but I'd like to put it as a constraint/trigger so that it will always apply (and so I can learn about it).
There are two things you'll need to fix.
Mutating occurs because you are trying to do a SELECT in the row-level part of a trigger. Check out COMPOUND triggers as a way to mitigate this. Basically you capture info at row level, and then process that info at the after-statement level. There are some examples of that in my video here: https://youtu.be/EFj0wTfiJTw
Even with the mutating issue resolved, there is a fundamental flaw in the logic here (trigger or otherwise) due to concurrency. All you need is (say) three or four people all using this code at the same time. All of them will get "Yes, your count checks are OK" because none of them can see the others' uncommitted data. Thus they all get told they can proceed, and when they finally commit you'll have multiple rows stored, breaking the rule your trigger (or wherever your code is run) was trying to enforce. You'll need to lock an appropriate row so that you can control concurrent access to the table. For an UPDATE, that is easy, because it means there are already some rows for the location/class pairing. For an INSERT, you'll need to ensure an appropriate unique constraint is in place on a parent table somewhere. Hard to say without seeing the entire model.
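A sketch of the locking idea for the UPDATE case, as it might appear inside the procedure that performs the change (TBL_LOCATIONS is an assumed parent table, and p_location_id and l_lock_id are assumed names; adapt to your model):

-- serialise on the parent row: any other session trying to schedule
-- a class at this location blocks here until we commit or roll back
select location_id
into   l_lock_id
from   tbl_locations
where  location_id = p_location_id
for update;

-- ... now the overlap/count checks can be run without a race ...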
In principle, a compound trigger could look like this:
CREATE OR REPLACE TYPE CLASS_REC AS OBJECT(
CLASS_ID INTEGER,
LOCATION_ID INTEGER,
START_DATE DATE,
END_DATE DATE,
DAY_IS_HALF INTEGER
);
CREATE OR REPLACE TYPE CLASS_TYPE AS TABLE OF CLASS_REC;
CREATE OR REPLACE TRIGGER UIC_CLASSES
FOR INSERT OR UPDATE ON TBL_CLASSES
COMPOUND TRIGGER
classes CLASS_TYPE;
v_count NUMBER;
v_half TBL_CLASS_SCHED.DAY_IS_HALF%TYPE;
BEFORE STATEMENT IS
BEGIN
classes := CLASS_TYPE();
END BEFORE STATEMENT;
BEFORE EACH ROW IS
BEGIN
classes.EXTEND;
classes(classes.LAST) := CLASS_REC(:NEW.CLASS_ID, :NEW.LOCATION_ID, :NEW.START_DATE, :NEW.END_DATE, :NEW.DAY_IS_HALF);
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
FOR i IN 1 .. classes.COUNT LOOP
-- MAX() avoids the GROUP BY and a NO_DATA_FOUND when no rows match
SELECT COUNT(*), MAX(DAY_IS_HALF)
INTO v_count, v_half
FROM TBL_CLASSES
WHERE CLASS_ID = classes(i).CLASS_ID
AND LOCATION_ID = classes(i).LOCATION_ID
AND classes(i).START_DATE BETWEEN START_DATE AND END_DATE;
IF v_count = 2 THEN
RAISE_APPLICATION_ERROR(-20001,'You cannot schedule more than 2 classes that are a half day at the same location');
END IF;
IF v_count = 1 AND classes(i).DAY_IS_HALF = 1 THEN
IF v_half != 1 THEN
RAISE_APPLICATION_ERROR(-20001,'You cannot schedule a class during another class''s time period of the same type at the same location');
END IF;
END IF;
END LOOP;
END AFTER STATEMENT;
END;
/
But as stated by @Connor McDonald, there are several design flaws, even in a single-user environment.
A user may update DAY_IS_HALF; I don't think the procedure covers all such variants. Or a user may update END_DATE so that the new time intersects with existing classes.
Better to avoid direct inserts into the table: create a PL/SQL stored procedure that performs all the validations you need and, if none of them fail, performs the insert. Grant execute on that procedure to the applications, and do not grant the applications insert on the table. That way, all the data-related business rules live in the database, and no data that violates those rules can be entered into the tables, no matter by which client application, since every client application must call a stored procedure to insert or update and cannot perform DML directly on the table.
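A sketch of that approach, using the column names from the trigger above (the procedure name, validation, and grantee are illustrative):

create or replace procedure schedule_class (
    p_class_id    in tbl_classes.class_id%type,
    p_location_id in tbl_classes.location_id%type,
    p_start_date  in tbl_classes.start_date%type,
    p_end_date    in tbl_classes.end_date%type,
    p_day_is_half in tbl_classes.day_is_half%type )
as
    l_count pls_integer;
begin
    -- all validations live in one place, with no mutating-table issue
    -- (this still needs the row locking discussed above to be safe
    -- under concurrent sessions)
    select count(*)
    into   l_count
    from   tbl_classes
    where  location_id = p_location_id
    and    p_start_date between start_date and end_date;

    if l_count > 0 then
        raise_application_error(-20001, 'Time slot already taken at this location');
    end if;

    insert into tbl_classes (class_id, location_id, start_date, end_date, day_is_half)
    values (p_class_id, p_location_id, p_start_date, p_end_date, p_day_is_half);
end schedule_class;
/

grant execute on schedule_class to app_user;  -- and no INSERT grant on TBL_CLASSES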
I think the main problem is the ambiguity of the role of the table TBL_CLASS_SCHED and the lack of a clear definition of the DAY_IS_HALF column (morning or afternoon?).
If the objective is to avoid 2 reservations of the same location at the same half day, the easiest solution is to use TBL_CLASS_SCHED to enforce the constraint, with (start_date, location_id) as the primary key, a morning reservation having start_date truncated to 00:00 and an afternoon reservation having start_date set to 12:00. You don't need end_date in that table, since each row represents a half-day reservation.
The complete solution will then need a BEFORE trigger on TBL_CLASSES for UPDATE and INSERT events to make sure start_date and end_date are clipped to match the 00:00 and 12:00 rule, and an AFTER trigger on INSERT, UPDATE and DELETE where you calculate all the half-day rows to maintain in TBL_CLASS_SCHED (so 1 full-day reservation in TBL_CLASSES will generate 2 rows in TBL_CLASS_SCHED). Since you maintain TBL_CLASS_SCHED from triggers set on TBL_CLASSES without any need to SELECT from the latter, you don't have to worry about the mutating problem, and because you constrain start_date to be either at 00:00 or 12:00, the primary key constraint will do the job for you. You may even add a unique index on (start_date, class_id) in TBL_CLASS_SCHED to prevent a class from being scheduled at 2 locations at the same time, if you need to.
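In DDL terms, the proposed constraint table might look like this (column data types are assumed):

create table tbl_class_sched (
    start_date  date    not null,  -- always 00:00 (morning) or 12:00 (afternoon)
    location_id integer not null,
    class_id    integer not null,
    constraint class_sched_pk primary key (start_date, location_id)
);

-- optional: stop one class being scheduled at two locations at the same time
create unique index class_sched_uk on tbl_class_sched (start_date, class_id);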
I am trying to get my last 5 employees (the ones with the lowest salary) and raise their salary by 5%.
I am using a varray to store their ids, but I don't know how to use those ids in an update statement (something like update employees \ set salary = salary * 1.05 \ where id_employee in varray).
here's what i have for now:
DECLARE
TYPE tip_cod IS VARRAY(20) OF NUMBER;
coduri tip_cod;
BEGIN
SELECT employee_id
BULK COLLECT INTO coduri
FROM (
SELECT employee_id
from employees
where commission_pct IS NULL
order by salary asc
)
WHERE ROWNUM < 6;
-- after i store their ids in coduri i want to update their salary
FOR i IN 1 .. coduri.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(coduri(i));
END LOOP;
END;
/
If you are practicing the use of loops to do things one at a time (not a good approach for this task!) you can replace your calls to put_line with update statements, something like
...
update employees set salary = 1.05 * salary where employee_id = coduri(i);
...
The beauty of PL/SQL is that you can embed such plain-SQL statements directly within PL/SQL code, no need for preparation of any kind.
After you are done with the updates, you will need to commit for the changes to become permanent, usually after the block or procedure, not within it.
Alternatively, if you want a single update (with an in condition), you will need to define the varray type at the schema level, not within the anonymous block (or procedure). This is because the update statement is a SQL statement, which can't "see" locally defined data types. Then, in the update statement, you will need to use the table operator to unwind the array's members. Something like this:
create type tip_cod is varray(20) of number;
/
DECLARE
coduri tip_cod;
BEGIN
SELECT employee_id
BULK COLLECT INTO coduri
FROM (
SELECT employee_id
from employees
where commission_pct IS NULL
order by salary asc
)
WHERE ROWNUM < 6;
update employees set salary = 1.05 * salary
where employee_id in (select * from table(coduri));
END;
/
commit;
Notice how the varray type is defined on its own, then it is used in the PL/SQL block. Also don't forget the commit at the end.
When you work with collection types, there is also the member of predicate, as in employee_id member of coduri. Alas, member of applies to nested tables, not to varrays; with the varray type used here you must unwind the array explicitly, with the table operator.
There is much more to collections (Oracle's term for arrays). There are 3 types:
Varrays
Associative Arrays
Nested Tables
If you want to understand collections, you must understand all 3. (IMHO, of the 3, varrays are the most limited.)
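For orientation, here is how the three kinds are declared and used, as a minimal sketch:

declare
    -- varray: fixed maximum size, always dense
    type t_varray is varray(10) of number;
    -- associative array: indexed by pls_integer or varchar2, may be sparse,
    -- PL/SQL-only (cannot be stored in a table column)
    type t_assoc is table of number index by pls_integer;
    -- nested table: unbounded, usable in SQL when declared at schema level
    type t_nested is table of number;

    v t_varray := t_varray(1, 2, 3);
    a t_assoc;
    n t_nested := t_nested(4, 5, 6);
begin
    a(100) := 42;  -- sparse keys are fine for associative arrays
    dbms_output.put_line(v.count || ' / ' || a.count || ' / ' || n.count);
end;
/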
Mathguy presents one option: "casting" the array as a table via the TABLE(...) function. I'll present another: a nested table combined with BULK COLLECT and FORALL to accomplish the update.
declare
type employee_id_att is table of hr.employees.employee_id%type;
employee_id_array employee_id_att;
begin
select employee_id
bulk collect
into employee_id_array
from hr.employees
where commission_pct is null
order by salary
fetch first 5 rows only;
forall emp_indx in 1 .. employee_id_array.count
update hr.employees
set salary = 1.05 * salary
where employee_id = employee_id_array(emp_indx);
end ;
/
Take away: there is much, much more to collections than defining a LOOP. Spend some time with the documentation, write tests, and examine the results. The important thing when you do not understand something is to write some code. It will probably fail; that is good, so write something else. Do not be afraid of errors/exceptions; in development they are your friends. And if there is something you cannot understand, then post a specific question. Be prepared to show several failed attempts; that will give the community an idea of your thinking and whether you are on the correct path.
PL/SQL:
Intent: My intent was to access an employee tuple, defined by the cursor below, using the employee_id as the key.
Problem: I created a cursor, l_employees_cur, and want to create a table type, l_employees_t, as below, but the compiler is complaining with PLS-00315: implementation restriction: unsupported table index type.
CURSOR l_employees_cur
IS
SELECT employee_id,manager_id,first_name,last_name FROM employees;
type l_employees_t
IS
TABLE OF l_employees_cur%rowtype INDEX BY employees.employee_id%TYPE;
The definition of employees.employee_id is:
EMPLOYEE_ID NUMBER(6) NOT NULL
Why can't I do this? Or am I doing something wrong?
From the Oracle Documentation:
Associative Arrays
An associative array (formerly called PL/SQL table or index-by table) is a set of key-value pairs. Each key is a unique index, used to locate the associated value with the syntax variable_name(index).
The data type of index can be either a string type or PLS_INTEGER. Indexes are stored in sort order, not creation order. For string types, sort order is determined by the initialization parameters NLS_SORT and NLS_COMP.
I think your mistake is in the declaration of the PL/SQL table.
Why don't you try this one:
type l_employees_t
IS
TABLE OF l_employees_cur%rowtype INDEX BY pls_integer;
I also have a question for you:
What is the meaning of EMPLOYEE_ID NOT NULL NUMBER(6) in your code above?
Greetings
Carlos
Storing and Retrieving SQL Query Output in a PL/SQL Collection
The example in the OP looks a lot like Oracle's new sample HR data schema. (For those old-timers who know, the successor to the SCOTT-TIGER data model). This solution was developed on an Oracle 11g R2 instance.
The Demo Table Design - EMP
Demonstration Objectives
This example will show how to create a PL/SQL collection from an object TYPE definition. The complex data type is derived from the following cursor definition:
CURSOR l_employees_cur IS
SELECT emp.empno as EMPLOYEE_ID, emp.mgr as MANAGER_ID, emp.ename as LAST_NAME
FROM EMP;
After loading the cursor contents into an index-by collection variable, the last half of the stored procedure contains an optional step which loops back through the collection and displays the data either through DBMS_OUTPUT or an INSERT DML operation on another table.
Stored Procedure Example Source Code
This is the stored procedure used to query the demonstration table, EMP.
create or replace procedure zz_proc_employee is
CURSOR l_employees_cur IS
SELECT emp.empno as EMPLOYEE_ID, emp.mgr as MANAGER_ID, emp.ename as LAST_NAME
FROM EMP;
TYPE employees_tbl_type IS TABLE OF l_employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
employees_rec_var l_employees_cur%ROWTYPE;
employees_tbl_var employees_tbl_type;
v_output_string varchar2(80);
c_output_template constant varchar2(80):=
'Employee: <<EMP>>; Manager: <<MGR>>; Employee Name: <<ENAME>>';
idx integer;  -- running index while loading the collection
BEGIN
idx:= 1;
OPEN l_employees_cur;
FETCH l_employees_cur INTO employees_rec_var;
WHILE l_employees_cur%FOUND LOOP
employees_tbl_var(idx):= employees_rec_var;
FETCH l_employees_cur INTO employees_rec_var;
idx:= idx + 1;
END LOOP;
CLOSE l_employees_cur;
-- OPTIONAL (below) Output Loop for Displaying The Array Contents
-- At this point, employees_tbl_var can be handed off or returned
-- for additional processing.
FOR outloop IN 1 .. employees_tbl_var.COUNT LOOP
-- Build the output string:
v_output_string:= replace(c_output_template, '<<EMP>>',
to_char(employees_tbl_var(outloop).employee_id));
v_output_string:= replace(v_output_string, '<<MGR>>',
to_char(employees_tbl_var(outloop).manager_id));
v_output_string:= replace(v_output_string, '<<ENAME>>',
employees_tbl_var(outloop).last_name);
-- dbms_output.put_line(v_output_string);
INSERT INTO zz_output(output_string, output_ts)
VALUES(v_output_string, sysdate);
COMMIT;
END LOOP;
END zz_proc_employee;
I commented out the dbms_output call due to problems with the configuration of my server beyond my control. The alternate insert into an output table is a quick way of visually verifying that the data from the EMP table found its way successfully into the declared collection variable.
Results and Discussion of the Solution
Here is my output after calling the procedure and querying my output table: [output listing not reproduced]
While the actual purpose behind the access to this table isn't clear in the very terse detail of the OP, I assumed that the first approach was an attempt to understand the use of collections and custom data types for efficient data extraction and handling from structures such as PL/SQL cursors.
The first portion of this example procedure is very reusable; the initial steps represent a working way of declaring and loading PL/SQL collections. Notice that even if your own version of this EMP table is different, the only place that requires redefinition is the cursor itself.
Working with types, arrays, nested tables and other collection types will actually simplify work in the long run because of their dynamic nature.
I've been experimenting with trigger functions in Oracle under various constraints. Recently someone recommended using a materialized view instead of a trigger for the following case, which I think is quite a wise choice. But for learning purposes, I would like to know how the trigger approach works.
Create a trigger to check a specific constraint on a monthly basis.
table rent
| ID | Member | Book      | Date      |
|----|--------|-----------|-----------|
| 1  | John   | fairytale | 2-jun-12  |
| 2  | Peter  | friction  | 4-jun-12  |
| 3  | John   | comic     | 12-jun-12 |
| 4  | Peter  | magazine  | 20-jun-12 |
| 5  | Peter  | magazine  | 20-jul-12 |
| 6  | Peter  | magazine  | 20-jul-12 |
Constraint: members are only allowed to borrow 2 books monthly.
Code contributed by @HiltoN, which I don't quite understand:
create or replace trigger tr_rent
before insert on rent
for each row
declare
v_count number;
begin
select count(id)
into v_count
from rent
where member = :new.member;
if v_count > 2 then
raise_application_error (-20001, 'Limit reached');
end if;
end;
In general, that trigger does not work.
In general, a row-level trigger on table X cannot query table X. So, in your case, a row-level trigger on RENT is generally not allowed to query the RENT table; doing so would throw a mutating table exception. If you can guarantee that your application will only ever insert one row at a time using an INSERT ... VALUES statement, you won't hit a mutating table error, but that is generally not an appropriate restriction. The check is also unsafe in a multi-user environment: if two transactions run at about the same time, both checking out a book to the same user, this trigger will potentially allow both to succeed.
The proper place to add this sort of check is almost certainly in the stored procedure that creates the RENT record. That stored procedure should check how many rentals the member has over the current month and error out if that is more than the limit. Something like
CREATE OR REPLACE PROCEDURE rent_book( p_member IN rent.member%type,
p_book IN rent.book%type )
AS
l_max_rentals_per_month constant number := 2;
type rental_nt is table of rent.rent_id%type;
l_rentals_this_month rental_nt;
BEGIN
SELECT rent_id
BULK COLLECT INTO l_rentals_this_month
FROM rent
WHERE member = p_member
AND trunc(rental_date,'MM') = trunc(sysdate, 'MM')
FOR UPDATE;
IF( l_rentals_this_month.count >= l_max_rentals_per_month )
THEN
RAISE_APPLICATION_ERROR( -20001, 'Rental limit exceeded' );
ELSE
INSERT INTO rent( rent_id, member, book, rental_date )
VALUES( rent_id_seq.nextval, p_member, p_book, sysdate );
END IF;
END;
If you really wanted to enforce something like this using triggers, the solution would get much more complicated. If you don't care about efficiency, you could create a statement-level trigger
create or replace trigger tr_rent
after insert on rent
declare
v_count number;
begin
select count(*)
into v_count
from (select member, count(*)
from rent
where trunc(rental_date,'MM') = trunc(sysdate,'MM')
group by member
having count(*) > 2);
if v_count >= 1 then
raise_application_error (-20001, 'At least one person has exceeded their rental limit');
end if;
end;
This works but it requires (at least) that you do the validation for every member every time. That is pretty inefficient when you have a large number of members. You could reduce the workload by substantially increasing complexity. If you:
Create a package which declares a package global variable that is a collection of rent.member%type.
Create a before statement trigger that initializes this collection.
Create a row-level trigger that adds the :new.member to this collection
Create an after statement trigger that is similar to the one above but that has an additional condition that the member is in the collection you're maintaining.
This "three-trigger solution" adds a substantial amount of complexity to the system particularly where the appropriate solution is not to use a trigger in the first place.
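For completeness, here is a sketch of the first three pieces (all names are illustrative); the after-statement trigger would then be the one shown above, restricted to the members captured in the collection:

create or replace package rent_trg_state as
    type member_tab is table of rent.member%type index by pls_integer;
    g_members member_tab;
end rent_trg_state;
/

create or replace trigger rent_before_stmt
before insert on rent
begin
    rent_trg_state.g_members.delete;  -- reset for each triggering statement
end;
/

create or replace trigger rent_each_row
before insert on rent
for each row
begin
    -- just record the member; no query on RENT here, so no mutating error
    rent_trg_state.g_members(rent_trg_state.g_members.count + 1) := :new.member;
end;
/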
I agree with Justin, your trigger wouldn't work for a number of reasons. A materialized view or stored procedure solution would get you there. I suggest the very best solution to this problem would be a simple unique index:
create unique index rent_user_limit on rent (member, trunc(rental_date, 'month'));
I need to set a constraint so that the user is unable to enter any records after he/she has entered 5 records in a single month. Would it be advisable to write a trigger or procedure for that? Or are there any other ways I can set up the constraint?
Instead of writing a trigger I have opted to write a procedure for the constraint, but how do I check whether the procedure is working?
Below is the procedure:
CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
newReadNo In Int,
newReadValue In Int,
newReaderID In Int,
newMeterID In Int
)
AS
varRowCount Int;
BEGIN
Select Count(*) INTO varRowCount
From Reading
WHERE ReaderID = newReaderID
AND Trunc(ReadDate,'mm') = Trunc(Sysdate,'mm');
IF (varRowCount >= 5) THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('*************************************************');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE(' You are attempting to enter more than 5 Records ');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('*************************************************');
ROLLBACK;
RETURN;
END;
ELSIF (varRowCount < 5) THEN
BEGIN
INSERT INTO Reading
VALUES(seqReadNo.NextVal, sysdate, newReadValue,
newReaderID, newMeterID);
COMMIT;
END;
END IF;
END;
Can anyone help me look through it?
This is the sort of thing that you should avoid putting in a trigger, especially the ROLLBACK and the COMMIT. That seems extremely dangerous (and I'm not even sure whether it's possible): you might roll back other changes that you wished to commit, or vice versa.
Also, by putting this in a trigger you are going to get the following error:
ORA-04091: table XXXX is mutating, trigger/function may not see it
There are ways round this but they're excessive and involve doing something funky in order to get round Oracle's insistence that you do the correct thing.
This is the perfect opportunity to use a stored procedure to insert data into your table. You can check the number of current records prior to doing the insert meaning that there is no need to do a ROLLBACK.
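A sketch of that shape, reworking the procedure above: transaction control is left to the caller, an error is raised instead of a ROLLBACK, and the unused newReadNo parameter is dropped:

CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
    newReadValue In Int,
    newReaderID  In Int,
    newMeterID   In Int
)
AS
    varRowCount Int;
BEGIN
    Select Count(*) INTO varRowCount
    From Reading
    WHERE ReaderID = newReaderID
    AND Trunc(ReadDate,'mm') = Trunc(Sysdate,'mm');
    IF varRowCount >= 5 THEN
        -- let the caller decide what happens to the wider transaction
        RAISE_APPLICATION_ERROR(-20001,
            'You are attempting to enter more than 5 records this month');
    END IF;
    INSERT INTO Reading
    VALUES (seqReadNo.NextVal, sysdate, newReadValue, newReaderID, newMeterID);
END InsertReadingCheck;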
It depends upon your application: if the insert is already present in your application in many places, then a trigger is the better option.
This is a behavior constraint. It's a matter of opinion, but I would err on the side of keeping this kind of business logic OUT of your database. I would instead keep track of who added what records in the records table, and on what days/times. You can have an SP to get this information, but then your code-behind should handle whether or not the user can see certain links (or functions) based on the data that's returned. Whether that means keeping the user from accessing the page(s) where they insert records, or giving them read-only views, is up to you.
One declarative way you could solve this problem that would obey all concurrency rules is to use a separate table to keep track of the number of inserts per user per month:
create table inserts_check (
ReaderID integer not null,
month date not null,
number_of_inserts integer constraint max_number_of_inserts check (number_of_inserts <= 5),
primary key (ReaderID, month)
);
Then create a trigger on the table (or all tables) for which inserts should be capped at 5:
create trigger check_insert_limit  -- trigger name is illustrative
after insert on <table>
for each row
begin
MERGE INTO inserts_check t
USING (select :new.ReaderID as ReaderID, trunc(sysdate, 'MM') as month, 1 as number_of_inserts from dual) s  -- assumes the table has a ReaderID column
ON (t.ReaderID = s.ReaderID and t.month = s.month)
WHEN MATCHED THEN UPDATE SET t.number_of_inserts = t.number_of_inserts + 1
WHEN NOT MATCHED THEN INSERT (ReaderID, month, number_of_inserts)
VALUES (s.ReaderID, s.month, s.number_of_inserts);
end;
Once the user has made 5 inserts, the constraint max_number_of_inserts will fail.
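To see the constraint fire, you can simulate what the trigger's WHEN MATCHED branch does for a reader who is already at 5 this month (ReaderID 1 is just a sample value):

update inserts_check
set    number_of_inserts = number_of_inserts + 1
where  ReaderID = 1
and    month = trunc(sysdate, 'MM');
-- fails with ORA-02290: check constraint (... MAX_NUMBER_OF_INSERTS) violated;
-- when raised inside the trigger, this also rolls back the insert that fired it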