customers:
+------------+--------------+
| cid | Name |
+------------+--------------+
| 1 | Bob |
| 2 | John |
| 3 | Jane |
+------------+--------------+
accounts:
+------------+--------------+
| aid | type |
+------------+--------------+
| 1 | Checking |
| 2 | Saving |
| 3 | Checking |
+------------+--------------+
transactions:
+------------+--------------+--------------+--------------+
| tid | cid | aid | type |
+------------+--------------+--------------+--------------+
| 1 | 1 | 1 | Open |
| 2 | 2 | 3 | Open |
| 3 | 1 | 2 | Open |
| 4 | 2 | 3 | Deposit |
+------------+--------------+--------------+--------------+
I am trying to write a trigger that writes to a logs table when a new account is successfully opened.
Right now I have this:
CREATE OR REPLACE TRIGGER acc_opened
BEFORE INSERT ON transactions
FOR EACH ROW
DECLARE
c_name customers.name%TYPE;
BEGIN
IF :new.type = 'Open' THEN
SELECT name into c_name
FROM customers c
WHERE c.cid = :new.cid;
INSERT INTO logs (who, what) VALUES (c_name, 'An account has been opened');
END;
/
The code that I have doesn't work, and I don't know where to go from here.
The trigger completes, but when it fires, I get this error message:
PLS-00103: Encountered the symbol "END" when expecting one of the following: ( begin case declare exit for goto if loop mod null pragma raise return select update while with << continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe purge
As with your previous question, if you want to refer to a particular column of the new row of data, you need to use the :new pseudo-record. So, at a minimum,
SELECT cid
INTO c_id
FROM transactions t
WHERE t.aid = aid;
would need to be
SELECT cid
INTO c_id
FROM transactions t
WHERE t.aid = :new.aid;
Beyond that, are you sure that the row exists in the transactions table before the row is inserted into the accounts table? Assuming that you have normal foreign key constraints, I would generally expect that you would insert a row into the accounts table before inserting the row into the transactions table.
The name transactions also seems pretty odd. If that is really just mapping the customer ID to the account ID, transactions seems like a rather poor name. If that table actually stores transactions, I'm not sure why it would have a customer ID. But if it does store transactions, there must be some other table that maps customers to accounts.
In your updated trigger, you are missing the END IF statement:
CREATE OR REPLACE TRIGGER acc_opened
BEFORE INSERT ON transactions
FOR EACH ROW
DECLARE
c_name customers.name%TYPE;
BEGIN
IF :new.type = 'Open'
THEN
SELECT name
into c_name
FROM customers c
WHERE c.cid = :new.cid;
INSERT INTO logs (who, what)
VALUES (c_name, 'An account has been opened');
END IF;
END;
/
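As a quick check, using the sample data from the question (a sketch; it assumes the logs table has just the who and what columns used in the INSERT):
-- cid 3 is Jane in the sample data, so the trigger should log her name
INSERT INTO transactions (tid, cid, aid, type) VALUES (5, 3, 2, 'Open');
SELECT who, what FROM logs;
-- WHO    WHAT
-- Jane   An account has been opened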
It's difficult to explain the question well in the title.
I am inserting 6 values from (or based on values in) one row.
I also need to insert a value from a second row where:
- The values in one column (ID) must be equal
- The value in column (CODE) in the main source row must be IN (100,200), whereas the other row must have a value of 300 or 400
- The value in another column (OBJID) in the secondary row must be the lowest value above that of the primary row.
Source Table looks like:
OBJID | CODE | ENTRY_TIME | INFO | ID | USER
---------------------------------------------
1 | 100 | x timestamp| .... | 10 | X
2 | 100 | y timestamp| .... | 11 | Y
3 | 300 | z timestamp| .... | 10 | F
4 | 100 | h timestamp| .... | 10 | X
5 | 300 | g timestamp| .... | 10 | G
So to provide an example..
In my second table I want to insert OBJID, OBJID2, CODE, ENTRY_TIME, substr(INFO, ...), ID, USER
i.e. from my example a line inserted in the second table would look like:
OBJID | OBJID2 | CODE | ENTRY_TIME | INFO | ID | USER
-----------------------------------------------------------
1 | 3 | 100 | x timestamp| substring | 10 | X
4 | 5 | 100 | h timestamp| substring2| 10 | X
My insert for everything that just comes from one row works fine.
INSERT INTO TABLE2
(ID, OBJID, INFO, USER, ENTRY_TIME)
SELECT ID, OBJID,
       DECODE(CODE, 100, SUBSTR(INFO, 12, LENGTH(INFO) - 27),
                    600, 'CREATE') INFO,
       USER, ENTRY_TIME
FROM TABLE1
WHERE CODE IN (100,200);
I'm aware that I'll need to use an alias on TABLE1, but I don't know how to get the rest to work, particularly in an efficient way. There are 2 million rows right now, but there will be closer to 20 million once I start using production data.
You could try this:
select primary.* ,
(select min(objid)
from table1 secondary
where primary.objid < secondary.objid
and secondary.code in (300,400)
and primary.id = secondary.id
) objid2
from table1 primary
where primary.code in (100,200);
Ok, I've come up with:
select OBJID,
min(case when code in (300,400) then objid end)
over (partition by id order by objid
range between 1 following and unbounded following
) objid2,
CODE, ENTRY_TIME, INFO, ID, USER1
from table1;
So you need an INSERT ... SELECT of the above query, with a filter of objid2 is not null and code in (100,200) - see the sketch below.
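Putting that together, a sketch of the full statement (the TABLE2 column list, including an OBJID2 column, is an assumption based on the desired output above; USER1 follows the analytic query, since USER is a reserved word):
INSERT INTO TABLE2
(OBJID, OBJID2, CODE, ENTRY_TIME, INFO, ID, USER1)
SELECT OBJID, OBJID2, CODE, ENTRY_TIME,
       DECODE(CODE, 100, SUBSTR(INFO, 12, LENGTH(INFO) - 27),
                    600, 'CREATE') INFO,
       ID, USER1
FROM (select OBJID,
             min(case when code in (300,400) then objid end)
               over (partition by id order by objid
                     range between 1 following and unbounded following
                    ) objid2,
             CODE, ENTRY_TIME, INFO, ID, USER1
      from table1)
WHERE objid2 IS NOT NULL
  AND code IN (100, 200);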
We need to implement a query rewrite with a bind variable because we don't have the option of modifying the web application source code. Example:
BEGIN
SYS.DBMS_ADVANCED_REWRITE.declare_rewrite_equivalence (
name => 'test_rewrite2',
source_stmt => 'select COUNT(*) from ViewX where columnA = :1',
destination_stmt => 'select COUNT(*) from ViewY where columnA = :1',
validate => FALSE,
rewrite_mode => 'recursive');
END;
The above command will result in an error because there is a bind variable:
30353. 00000 - "expression not supported for query rewrite"
*Cause: The SELECT clause referenced UID, USER, ROWNUM, SYSDATE,
CURRENT_TIMESTAMP, MAXVALUE, a sequence number, a bind variable,
correlation variable, a set result, a trigger return variable, a
parallel table queue column, collection iterator, a non-deterministic
date format token RR, etc.
*Action: Remove the offending expression or disable the REWRITE option on
the materialized view.
I am reading here that there is a workaround, but I just cannot find the document anywhere online.
Could you please tell me what the workaround is?
You can't specify the bind parameters, but it should already work as you wish. The key is the 'recursive' rewrite_mode you passed.
The recursive and general modes intercept all statements that involve the table (or view), disregarding the filter, and transform them to target the second table (or view), adapting the filter condition from your original statement.
(If you had defined it as TEXT_MATCH, it would have checked the presence of the same filter in the original and target statement in order to trigger the transformation.)
In the example below you can see that even though we don't define any bind condition, the filter id = 2 is applied nevertheless; in other words, it actually transforms SELECT * FROM A1 where id = 2 into SELECT * FROM A2 where id = 2.
set LINESIZE 300
drop table A1;
drop view A2;
drop index A1_IDX;
EXEC SYS.DBMS_ADVANCED_REWRITE.drop_rewrite_equivalence (name => 'test_rewrite');
create table A1 (id number, name varchar2(20));
insert into A1 values(1, 'hello world');
insert into A1 values(2, 'hola mundo');
create index A1_IDX on A1(id);
select * from A1;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
CREATE OR REPLACE VIEW A2 AS
SELECT id,
INITCAP(name) AS name
FROM A1
ORDER BY id desc;
BEGIN
SYS.DBMS_ADVANCED_REWRITE.declare_rewrite_equivalence (
name => 'test_rewrite',
source_stmt => 'SELECT * FROM A1',
destination_stmt => 'SELECT * FROM A2',
validate => FALSE,
rewrite_mode => 'recursive');
END;
/
select * from A1;
ID NAME
---------- --------------------
2 Hola Mundo
1 Hello World
select * from A1 where id = 2;
ID NAME
---------- --------------------
2 Hola Mundo
explain plan for
select * from A1 where id = 2;
select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 1034670462
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 2 (0)| 00:00:01 |
| 1 | VIEW | A2 | 1 | 25 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID | A1 | 1 | 25 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN DESCENDING| A1_IDX | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
---------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("ID"=2)
Note
-----
- dynamic sampling used for this statement (level=2)
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
20 rows selected
As you can see:
- the engine is transparently applying the transformation and returning the filtered result;
- on top of that, the transformation on the filter is applied: the filter is correctly "pushed" into the source table to extract the values from A1. It is not blindly extracting all values from A2 and then applying the filter, so performance is preserved.
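Applied to the original problem, this means the equivalence can be declared without the bind variable at all; the recursive mode will adapt the web application's filter (columnA = :1) at rewrite time. A sketch, reusing the names from the question:
BEGIN
SYS.DBMS_ADVANCED_REWRITE.declare_rewrite_equivalence (
name => 'test_rewrite2',
source_stmt => 'select COUNT(*) from ViewX',
destination_stmt => 'select COUNT(*) from ViewY',
validate => FALSE,
rewrite_mode => 'recursive');
END;
/
-- the application's "select COUNT(*) from ViewX where columnA = :1"
-- should now be rewritten to run against ViewY with the same filter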
I have a table - let's call it MASTER - with a lot of rows in it. Now, I had to create another table called MASTER_DETAILS, which will be populated with data from another system. Such data will be accessed via DB Link.
MASTER has a FK to MASTER_DETAILS (1 -> 1 relationship).
I created a SQL to populate the MASTER_DETAILS table:
INSERT INTO MASTER_DETAILS(ID, DETAIL1, DETAILS2, BLAH)
WITH QUERY_FROM_EXTERNAL_SYSTEM AS (
SELECT IDENTIFIER,
FIELD1,
FIELD2,
FIELD3
FROM TABLE#DB_LINK
--- DOZENS OF INNERS AND OUTER JOINS HERE
) SELECT MASTER_DETAILS_SEQ.NEXTVAL,
QES.FIELD1,
QES.FIELD2,
QES.FIELD3
FROM MASTER M
INNER JOIN QUERY_FROM_EXTERNAL_SYSTEM QES ON QES.IDENTIFIER = M.ID
--- DOZENS OF JOINS HERE
The approach above works fine to insert all the values into MASTER_DETAILS.
Problem is:
In the approach above, I cannot insert the value of MASTER_DETAILS_SEQ.CURRVAL into the MASTER table. So I create all the entries in the DETAILS table, but I don't link them to the MASTER table.
Does anyone see a way around this problem using only an INSERT statement? I wish I could avoid creating a complex script with LOOPS and everything to handle it.
Ideally I want to do something like this:
INSERT INTO MASTER_DETAILS(ID, DETAIL1, DETAILS2, BLAH) AND MASTER(MASTER_DETAILS_ID)
WITH QUERY_FROM_EXTERNAL_SYSTEM AS (
SELECT IDENTIFIER,
FIELD1,
FIELD2,
FIELD3
FROM TABLE#DB_LINK
--- DOZENS OF INNERS AND OUTER JOINS HERE
) SELECT MASTER_DETAILS_SEQ.NEXTVAL,
QES.FIELD1,
QES.FIELD2,
QES.FIELD3
FROM MASTER M
INNER JOIN QUERY_FROM_EXTERNAL_SYSTEM QES ON QES.IDENTIFIER = M.ID
--- DOZENS OF JOINS HERE,
SELECT MASTER_DETAILS_SEQ.CURRVAL FROM DUAL;
I know such approach does not work on Oracle - but I am showing this SQL to demonstrate what I want to do.
Thanks.
If there is really a 1-to-1 relationship between the two tables, then they could arguably be a single table. Presumably you have a reason to want to keep them separate. Perhaps the master is a vendor-supplied table you shouldn't touch and the detail is extra data; but then you're changing the master anyway by adding the foreign key field. Or perhaps the detail will be reloaded periodically and you don't want to update the master table; but then you have to update the foreign key field anyway. I'll assume you're required to have a separate table, for whatever reason.
If you put a foreign key on the master table that refers to the primary key on the detail table, you're restricted to it only ever being a 1-to-1 relationship. If that really is the case then conceptually it shouldn't matter which way the relationship is built - which table has the primary key and which has the foreign key. And if it isn't, then your model will break when your detail table (or the remote query) comes back with two rows related to the same master - even if you're sure that won't happen today, will it always be true? The pluralisation of the name master_details suggests that might be expected. Maybe. Having the relationship the other way around would prevent that being an issue.
I'm guessing you decided to put the relationship that way round so you can join the tables using the detail's key:
select m.column, md.column
from master m
join master_details md on md.id = m.detail_id
... because you expect that to be the quickest way, since md.id will be indexed (implicitly, as a primary key). But you could achieve the same effect by adding the master ID to the details table as a foreign key:
select m.column, md.column
from master m
join master_details md on md.master_id = m.id
It is good practice to index foreign keys anyway, and as long as you have an index on master_details.master_id then the performance should be the same (more or less; other factors may come into play, but I'd expect this to generally be the case). This would also allow multiple detail records in the future, without needing to modify the schema.
So as a simple example, let's say you have a master table created and populated with some dummy data:
create table master(id number, data varchar2(10),
constraint pk_master primary key (id));
create sequence seq_master start with 42;
insert into master (id, data)
values (seq_master.nextval, 'Foo ' || seq_master.nextval);
insert into master (id, data)
values (seq_master.nextval, 'Foo ' || seq_master.nextval);
insert into master (id, data)
values (seq_master.nextval, 'Foo ' || seq_master.nextval);
select * from master;
ID DATA
---------- ----------
42 Foo 42
43 Foo 43
44 Foo 44
The changes you've proposed might look like this:
create table detail (id number, other_data varchar2(10),
constraint pk_detail primary key(id));
create sequence seq_detail;
alter table master add (detail_id number,
constraint fk_master_detail foreign key (detail_id)
references detail (id));
insert into detail (id, other_data)
select seq_detail.nextval, 'Bar ' || seq_detail.nextval
from master m
-- joins etc
;
... plus the update of the master's foreign key, which is what you're struggling with, so let's do that manually for now:
update master set detail_id = 1 where id = 42;
update master set detail_id = 2 where id = 43;
update master set detail_id = 3 where id = 44;
And then you'd query as:
select m.data, d.other_data
from master m
join detail d on d.id = m.detail_id
where m.id = 42;
DATA OTHER_DATA
---------- ----------
Foo 42 Bar 1
Plan hash value: 2192253142
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 22 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 22 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| MASTER | 1 | 13 | 1 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN | PK_MASTER | 1 | | 0 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DETAIL | 3 | 27 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | PK_DETAIL | 1 | | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
If you swap the relationship around the changes become:
create table detail (id number, master_id number, other_data varchar2(10),
constraint pk_detail primary key(id),
constraint fk_detail_master foreign key (master_id)
references master (id));
create index ix_detail_master_id on detail (master_id);
create sequence seq_detail;
insert into detail (id, master_id, other_data)
select seq_detail.nextval, m.id, 'Bar ' || seq_detail.nextval
from master m
-- joins etc.
;
No update of the master table is needed, and the query becomes:
select m.data, d.other_data
from master m
join detail d on d.master_id = m.id
where m.id = 42;
DATA OTHER_DATA
---------- ----------
Foo 42 Bar 1
Plan hash value: 4273661231
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 19 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 19 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| MASTER | 1 | 10 | 1 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN | PK_MASTER | 1 | | 0 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DETAIL | 1 | 9 | 1 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | IX_DETAIL_MASTER_ID | 1 | | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
The only real difference in the plan is that you now have a range scan instead of a unique scan; if you're really sure it's 1-to-1 you could make the index unique but there's not much benefit.
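If you did decide to enforce the 1-to-1 rule at some point, a sketch of that change; Oracle can use the existing ix_detail_master_id index to police the constraint:
alter table detail add constraint uk_detail_master unique (master_id);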
SQL Fiddle of this approach.
I have a table like this:
myTable (id, group_id, run_date, table2_id, description)
I also have an index like this:
index myTable_grp_i on myTable (group_id)
I used to run a query like this:
select * from myTable t where t.group_id=3 and t.run_date='20120512';
and it worked fine and everyone was happy.
Until I added another index:
index myTable_tab2_i on myTable (table2_id)
My life became miserable... it's taking almost 5 times longer to run!
execution plan looks the same (with or without the new index):
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 220 | 17019
|* 1 | TABLE ACCESS BY INDEX ROWID| MYTABLE | 1 | 220 | 17019
|* 2 | INDEX RANGE SCAN | MYTABLE_GRP_I | 17056 | | 61
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("T"."RUN_DATE"='20120512')
2 - access("T"."GROUP_ID"=3)
I have almost no hair left on my head: why should another index, which is not used, on a column which is not in the WHERE clause, make a difference?
Update - things I have checked:
a. I removed the new index and it ran faster
b. I added the new index in 2 more different environments and the same thing happened
c. I changed MYTABLE_GRP_I to be on columns run_date and group_id - this made it run fast as lightning! (see the sketch below)
But still why does it happen ?
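For reference, a sketch of the index change described in (c), using the table and column names from the question:
drop index myTable_grp_i;
create index myTable_grp_i on myTable (run_date, group_id);
With both filtered columns in the index, the range scan identifies exactly the matching rows, instead of visiting every row with group_id = 3 and discarding most of them on the run_date filter.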
Let's say we have the following table structures:
documents documentStatusHistory status
+---------+ +--------------------+ +----------+
| docId | | docStatusHistoryId | | statusId |
+---------+ +--------------------+ +----------+
| ... | | docId | | ... |
+---------+ | statusId | +----------+
| ... |
+--------------------+
It may be obvious, but it's worth mentioning, that the current status of a document is the last Status History entered.
The system was slowly but surely degrading in performance and I suggested changing the above structure to:
documents documentStatusHistory status
+--------------+ +--------------------+ +----------+
| docId | | docStatusHistoryId | | statusId |
+--------------+ +--------------------+ +----------+
| currStatusId | | docId | | ... |
| ... | | statusId | +----------+
+--------------+ | ... |
+--------------------+
This way we'd have the current status of a document right where it should be.
Because the way the legacy applications were built I could not change the code on legacy applications to update the current status on the document table.
In this case I had to make an exception to my rule of avoiding triggers at all costs, simply because I don't have access to the legacy applications' code.
I created a trigger that updates the current status of a document every time a new status is added to the status history, and it works like a charm.
However, in an obscure and rarely used situation there is a need to DELETE the last status history, instead of simply adding a new one. So, I created the following trigger:
create or replace trigger trgD_History
after delete on documentStatusHistory
for each row
declare
currentStatusId number;
begin
select statusId
into currentStatusId
from documentStatusHistory
where docStatusHistoryId = (select max(docStatusHistoryId)
from documentStatusHistory
where docId = :old.docId);
update documentos
set currStatusId = currentStatusId
where docId = :old.docId;
end;
And that's where I got the infamous error ORA-04091.
I understand WHY I'm getting this error, even though I configured the trigger as an AFTER trigger.
The thing is that I can't see a way around this error. I have searched the net for a while and couldn't find anything helpful so far.
For the record, we're using Oracle 9i.
The standard workaround to a mutating table error is to create:
1. A package with a collection of keys (i.e. docIds in this case); a temporary table would also work.
2. A before statement trigger that initializes the collection.
3. A row-level trigger that populates the collection with each docId that has changed.
4. An after statement trigger that iterates over the collection and does the actual UPDATE.
So something like
CREATE OR REPLACE PACKAGE pkg_document_status
AS
  TYPE typ_changed_docids IS TABLE OF documentos.docId%type;
  changed_docids typ_changed_docids := new typ_changed_docids ();
  -- <<other methods as needed>>
END;
/

CREATE OR REPLACE TRIGGER trg_init_collection
BEFORE DELETE ON documentStatusHistory
BEGIN
  pkg_document_status.changed_docids.delete();
END;
/

CREATE OR REPLACE TRIGGER trg_populate_collection
BEFORE DELETE ON documentStatusHistory
FOR EACH ROW
BEGIN
  pkg_document_status.changed_docids.extend();
  pkg_document_status.changed_docids( pkg_document_status.changed_docids.count() ) := :old.docId;
END;
/

CREATE OR REPLACE TRIGGER trg_use_collection
AFTER DELETE ON documentStatusHistory
BEGIN
  FOR i IN 1 .. pkg_document_status.changed_docids.count()
  LOOP
    -- fix the current status for pkg_document_status.changed_docids(i),
    -- e.g. the same logic as your original row-level trigger; this is now
    -- safe because a statement-level trigger can query the table freely
    UPDATE documentos d
       SET d.currStatusId = (SELECT h.statusId
                               FROM documentStatusHistory h
                              WHERE h.docStatusHistoryId =
                                    (SELECT MAX(h2.docStatusHistoryId)
                                       FROM documentStatusHistory h2
                                      WHERE h2.docId = pkg_document_status.changed_docids(i)))
     WHERE d.docId = pkg_document_status.changed_docids(i);
  END LOOP;
  pkg_document_status.changed_docids.delete();
END;
/
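For completeness, a sketch of how the three triggers cooperate on a multi-row delete (the docId values are hypothetical):
-- trg_init_collection fires once (before the statement),
-- trg_populate_collection fires once per deleted row,
-- trg_use_collection fires once after all rows, where querying
-- documentStatusHistory is safe again, so no ORA-04091
DELETE FROM documentStatusHistory WHERE docId IN (1, 2, 3);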
Seems to be a duplicate of this question.
Check out Tom Kyte's take on that.