Is it possible to set a trigger to set the new row's value to be the result of a select statement? My current syntax is as follows and it's just not working:
CREATE TRIGGER "BRAND_NEW_TRIGGER"
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
:NEW.column_one := (SELECT details_col FROM other_table WHERE property_id = :NEW.property_id);
END;
/
I've fudged the details of the code above to protect my company's security. I know it doesn't make too much sense as shown, but there is a valid reason I need to pull and organise the data this way.
You can do a select into:
select ot.details_col
into :new.column_one
from other_table ot
where ot.property_id = :new.property_id;
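In context, the whole trigger would look something like this (a sketch using the fudged names from the question; the NO_DATA_FOUND handling is an assumption about what should happen when no match exists):
CREATE OR REPLACE TRIGGER "BRAND_NEW_TRIGGER"
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
    SELECT ot.details_col
      INTO :NEW.column_one
      FROM other_table ot
     WHERE ot.property_id = :NEW.property_id;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        :NEW.column_one := NULL;  -- assumption: leave the column empty when no match exists
END;
/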
Of course, I'd question whether this makes sense: it strongly implies that you've got a data model in need of some normalization.
Related
I have a user table, which is quite simple:
create table user (
user_id int primary key,
user_name varchar2(20)
)
And I built a couple of related tables associated with the user table, and each of those tables has a user_id and a user_name.
So here comes the question: I happened to enter a record with the wrong name, and then I linked to that wrong record from all the related tables. I want to correct the user table and at the same time synchronize user_name in all the related tables. How do I do that in a simple way? Plus, I didn't set any constraints on these tables.
Edit:
So let me put that more clearly. I can query all users from the user table, and then I just create a select in the JSP page. This selector has two fields: user_id and user_name. First I recorded a man as '01', 'tam', and then I recorded another row in salary with 'tam', '$1300'. This was all wrong because the name should have been 'tom'. It's easy to change user or salary, but in our system there are over 40 tables linked to user. I know it's a bad design, but it was designed that way by our DBA and it has worked like this for a long time.
We'll start by making the problem explicit. The data model violates Third Normal Form: instead of relying on user_id to look up user_name, every table dependent on the user table carries the attribute. The consequence is that correcting a mistake in user_name means propagating that change to every table.
Furthermore, it seems this application lacks a mechanism for correcting errors, or rather for propagating the correction to all the impacted tables. So, what to do?
Dynamic SQL and the data dictionary to the rescue:
declare
    l_id       user.user_id%type   := 1234;
    l_old_name user.user_name%type := 'Tam';
    l_new_name user.user_name%type := 'Tom';
begin
    -- every table in the current schema that has both a USER_ID and a USER_NAME column
    for rec in ( select table_name from user_tab_cols where column_name = 'USER_ID'
                 intersect
                 select table_name from user_tab_cols where column_name = 'USER_NAME'
               )
    loop
        execute immediate 'update '|| rec.table_name ||
            ' set user_name = :1 where user_id = :2 and user_name = :3'
            using l_new_name, l_id, l_old_name;
    end loop;
    commit;  -- commit once, after the loop, so the correction is all-or-nothing
end;
/
No guarantees about performance, because it depends on the data and indexing for each table.
"it already worked a long time"
Which makes me wonder how many data inconsistencies are contained in your system that you don't know about? Maybe your DBA needs to brush up on their data modelling skills.
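If you want to find out, the same dictionary trick can drive a consistency check. A sketch under the question's naming convention: user_master stands in for the question's user table, since USER is a reserved word in Oracle and the master table would really need a different or quoted name.
declare
    l_cnt integer;
begin
    for rec in ( select table_name from user_tab_cols where column_name = 'USER_ID'
                 intersect
                 select table_name from user_tab_cols where column_name = 'USER_NAME'
               )
    loop
        -- count rows whose user_name disagrees with the master table
        execute immediate
            'select count(*) from ' || rec.table_name || ' t' ||
            ' where exists (select 1 from user_master u' ||
            '               where u.user_id = t.user_id' ||
            '               and u.user_name <> t.user_name)'
            into l_cnt;
        if l_cnt > 0 then
            dbms_output.put_line(rec.table_name || ': ' || l_cnt || ' mismatched rows');
        end if;
    end loop;
end;
/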
I need to write a trigger in Oracle PL/SQL (11g), before inserting each row, that checks whether the row already exists: if it doesn't exist, create a new row; if it does, update the existing record.
What is the best way to do that?
Thanks, Gianluca
What you want to do is a MERGE INTO:
MERGE INTO myTable t
USING (SELECT 'Smith' AS Name, 1 AS Id FROM DUAL) data -- put your data in here
ON (t.Id = data.Id) -- pk or other matching criteria
WHEN MATCHED
THEN
UPDATE SET t.name = data.name
WHEN NOT MATCHED
THEN
INSERT (Id, Name)
VALUES (data.Id, data.Name);
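If the data comes from another table rather than from literals, the USING clause can select from it directly (a sketch; stagingTable is a hypothetical source with matching columns):
MERGE INTO myTable t
USING (SELECT Id, Name FROM stagingTable) data
ON (t.Id = data.Id)
WHEN MATCHED
THEN
UPDATE SET t.name = data.name
WHEN NOT MATCHED
THEN
INSERT (Id, Name)
VALUES (data.Id, data.Name);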
Building a trigger for this is possible but not intended, and you shouldn't do it.
You would have to abort the insert and do something else instead. That is not a good idea, for several reasons: it hides logic in the database, and badly behaved clients will keep doing the wrong thing.
You can abort with an error, but it doesn't sound like that's what you want.
If you really want to go that way, you could switch to updates only: never insert anything, and implement a BEFORE UPDATE trigger that checks whether the row exists:
CREATE OR REPLACE TRIGGER myTableTrigger
BEFORE UPDATE
ON myTable
FOR EACH ROW
BEGIN
-- If the row doesn't exist, insert one before the update.
END;
Alternatively you could go the long way and build some views:
https://dba.stackexchange.com/questions/24047/oracle-abort-within-a-before-insert-trigger-without-throwing-an-exception
In Oracle, I have a requirement where I need to insert records from Source to Target and then update the PROCESSED_DATE field of the source once the target has been updated.
One way is to use a cursor and loop row by row to achieve this.
Is there any other, more efficient way to do the same?
No need for a cursor. Assuming you want to transfer those rows that have not yet been transferred (identified by a NULL value in processed_date):
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;
To avoid updating rows that were inserted during the runtime of the INSERT or between the INSERT and the update, start the transaction in serializable mode.
Before you run the INSERT, start the transaction using the following statement:
set transaction isolation level SERIALIZABLE;
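Put together, the whole job then runs as one serializable transaction (note that SET TRANSACTION must be the first statement of the transaction):
set transaction isolation level SERIALIZABLE;
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;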
For more details see the manual:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10005.htm#i2067247
http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#BABCJIDI
A trigger should also work: the target table can have a trigger that, on insert, updates the source table's processed date column.
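For example, something along these lines (a sketch; it assumes the two tables share an id column):
CREATE OR REPLACE TRIGGER target_table_ai
AFTER INSERT ON target_table
FOR EACH ROW
BEGIN
    UPDATE source_table
       SET processed_date = SYSDATE
     WHERE id = :NEW.id;
END;
/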
My preferred solution in this sort of instance is to use a PL/SQL array along with batch DML, e.g.:
DECLARE
    CURSOR c IS SELECT * FROM tSource;
    TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
    tarr tarrt;
BEGIN
    -- read all the source rows into the PL/SQL array
    OPEN c;
    FETCH c BULK COLLECT INTO tarr;
    CLOSE c;
    -- batch-insert the captured rows into the target ...
    FORALL i IN 1..tarr.COUNT
        INSERT INTO tTarget VALUES tarr(i);
    -- ... and batch-mark exactly those rows as processed
    FORALL i IN 1..tarr.COUNT
        UPDATE tSource SET processed_date = SYSDATE
        WHERE tSource.id = tarr(i).id;
END;
The above code is an example only and makes some assumptions about the structure of your tables.
It first queries the source table, and will only insert and update those records - which means you don't need to worry about other sessions concurrently inserting more records into the source table while this is running.
It can also be easily changed to process the rows in batches (using the fetch LIMIT clause and a loop) rather than all-at-once like I have here.
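A batched variant might look like this (a sketch; the LIMIT of 1000 is an arbitrary choice):
DECLARE
    CURSOR c IS SELECT * FROM tSource;
    TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
    tarr tarrt;
BEGIN
    OPEN c;
    LOOP
        -- fetch and process at most 1000 rows at a time
        FETCH c BULK COLLECT INTO tarr LIMIT 1000;
        EXIT WHEN tarr.COUNT = 0;
        FORALL i IN 1..tarr.COUNT
            INSERT INTO tTarget VALUES tarr(i);
        FORALL i IN 1..tarr.COUNT
            UPDATE tSource SET processed_date = SYSDATE
            WHERE tSource.id = tarr(i).id;
    END LOOP;
    CLOSE c;
END;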
Got another answer from someone else. That solution seems more reasonable to me than enabling the serializable isolation level, since all my new records have PROCESSED_DATE as null (about 30 rows were inserted during the time the records were being inserted into the target table).
Also, the PROCESSED_DATE = NULL rows can only be updated by my job. No other user can update these records at any point in time.
declare
    date_stamp date;
begin
    -- capture one timestamp so the update and the insert agree on the value
    select sysdate
    into date_stamp
    from dual;
    -- stamp the unprocessed rows first ...
    update source set processed_date = date_stamp
    where processed_date is null;
    -- ... then copy exactly the rows just stamped
    insert into target
    select * from source
    where processed_date = date_stamp;
    commit;
end;
/
Let me know any further thoughts on this. Thanks a lot for all your help on this.
Can I create a view based on a nextval sequence?
I create a view like this:
create view seq_agents_nextval
as
select seq_agents.nextval from dual;
From the Oracle documentation I read that it doesn't work like that. Are there any other tricks or tips to create a view with output like that?
your best bet will be a udf in the spirit of the following code:
CREATE OR REPLACE FUNCTION my_nv RETURN INTEGER AS
    l_rv NUMBER;
BEGIN
    SELECT seq_agents.nextval
      INTO l_rv
      FROM DUAL;
    RETURN l_rv;
END;
/
CREATE OR REPLACE VIEW seq_agents_nextval AS
SELECT my_nv
  FROM DUAL;
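each select from the view then draws a fresh value from the sequence, so two consecutive selects return two different numbers:
SELECT * FROM seq_agents_nextval;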
otherwise you may query system views to get at least an approximate answer
CREATE OR REPLACE VIEW seq_agents_nextval AS
SELECT last_number + increment_by nv
FROM ALL_SEQUENCES
WHERE sequence_owner = '<the_proper_schema_name>'
AND sequence_name = 'SEQ_AGENTS'
;
but this value will be of limited use: it may be larger than the actual value by as much as the number of cached values for the sequence times the sequence increment, and it doesn't take into account the max value and carry-over behavior (the latter two aspects could be fixed).
keep in mind that an arbitrary number of new sequence values may be issued between you querying your view and using the returned value.
hope this helps & regards
I am familiar with Sybase, which allows queries of the format IF EXISTS () THEN ... ELSE ... END IF (or very close). This is a powerful construct that allows: "if exists, then update, else insert".
I am writing queries for DB2 on IBM iSeries box. I have seen the CASE keyword, but I cannot make it work. I always receive the error: "Keyword CASE not expected."
Sample:
IF EXISTS ( SELECT * FROM MYTABLE WHERE KEY = xxx )
THEN UPDATE MYTABLE SET VALUE = zzz WHERE KEY = xxx
ELSE INSERT INTO MYTABLE (KEY, VALUE) VALUES (xxx, zzz)
END IF
Is there a way to do this against DB2 on IBM iSeries? Currently, I run two queries. First a select, then my Java code decides to update/insert. I would rather write a single query as my server is located far away (across the Pacific).
UPDATE:
DB2 for i, as of version 7.1, now has a MERGE statement which does what you are looking for.
>>-MERGE INTO--+-table-name-+--+--------------------+----------->
'-view-name--' '-correlation-clause-'
>--USING--table-reference--ON--search-condition----------------->
.------------------------------------------------------------------------.
V |
>----WHEN--+-----+--MATCHED--+----------------+--THEN--+-update-operation-+-+----->
'-NOT-' '-AND--condition-' +-delete-operation-+
+-insert-operation-+
'-signal-statement-'
See IBM i 7.1 InfoCenter DB2 MERGE statement reference page
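Applied to the question's example, the upsert might look like this (a sketch using the question's xxx/zzz placeholders; if the VALUES table-constructor isn't available on your release, a SELECT from SYSIBM.SYSDUMMY1 serves the same role):
MERGE INTO MYTABLE t
USING (VALUES (xxx, zzz)) AS data (KEY, VALUE)
    ON t.KEY = data.KEY
WHEN MATCHED THEN
    UPDATE SET VALUE = data.VALUE
WHEN NOT MATCHED THEN
    INSERT (KEY, VALUE) VALUES (data.KEY, data.VALUE);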
DB2 on the AS/400 does not have a conditional INSERT / UPDATE statement.
You could drop the SELECT statement by executing the INSERT directly and, if it fails, executing the UPDATE statement. Flip the order of the statements if your data is more likely to UPDATE than INSERT.
A faster option would be to create a temporary table in QTEMP, INSERT all of the records into the temporary table and then execute a bulk UPDATE ... WHERE EXISTS and INSERT ... WHERE NOT EXISTS at the end to merge all of the records into the final table. The advantage of this method is that you can wrap all of the statements in a batch to minimize round trip communication.
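A sketch of that staging approach (table and column names borrowed from the question; the DECLARE creates the staging table in QTEMP):
DECLARE GLOBAL TEMPORARY TABLE SESSION.STAGE
    (KEY DECIMAL(9,0), VALUE CHAR(10))
    WITH REPLACE;
-- (bulk-insert all incoming rows into SESSION.STAGE here)
UPDATE MYTABLE m
   SET VALUE = (SELECT s.VALUE FROM SESSION.STAGE s WHERE s.KEY = m.KEY)
 WHERE EXISTS (SELECT 1 FROM SESSION.STAGE s WHERE s.KEY = m.KEY);
INSERT INTO MYTABLE (KEY, VALUE)
SELECT s.KEY, s.VALUE
  FROM SESSION.STAGE s
 WHERE NOT EXISTS (SELECT 1 FROM MYTABLE m WHERE m.KEY = s.KEY);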
You can perform control-flow logic (IF...THEN...ELSE) in an SQL stored procedure. Here's sample SQL source code:
-- Warning! Untested code ahead.
CREATE PROCEDURE libname.UPSERT_MYTABLE (
IN THEKEY DECIMAL(9,0),
IN NEWVALUE CHAR(10) )
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
DECLARE FOUND CHAR(1);
-- Set FOUND to 'Y' if the key is found, 'N' if not.
-- (Perhaps there's a more direct way to do it.)
SET FOUND = 'N';
SELECT 'Y' INTO FOUND
FROM SYSIBM.SYSDUMMY1
WHERE EXISTS
(SELECT * FROM MYTABLE WHERE KEY = THEKEY);
IF FOUND = 'Y' THEN
UPDATE MYTABLE
SET VALUE = NEWVALUE
WHERE KEY = THEKEY;
ELSE
INSERT INTO MYTABLE
(KEY, VALUE)
VALUES
(THEKEY, NEWVALUE);
END IF;
END;
Once you create the stored procedure, you call it like you would any other stored procedure on this platform:
CALL UPSERT_MYTABLE( xxx, zzz );
This slightly over-complex SQL procedure will solve your problem:
IBM Technote
If you want to do a mass update from another table, then have a look at the MERGE statement, an incredibly powerful statement that lets you insert, update or delete depending on the values from another table.
IBM DB2 Syntax