I have two tables, and they are connected by one field: B_ID of table A and id of table B.
I want to use SQL to insert data into these two tables.
How do I write the insert SQL?
1. id in table B is auto-increment.
2. In a clumsy way, I can insert data into table B first, then select the id from table B, and then add that id to table A as message_id.
You cannot insert data into multiple tables in one SQL statement. Just insert the data into table B first and then into table A. You can use the RETURNING clause to get the ID value and get rid of the additional select statement between the inserts.
See: https://oracle-base.com/articles/misc/dml-returning-into-clause
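For example, a minimal PL/SQL sketch of the RETURNING approach (the non-key column names are placeholders, since the question only names B_ID and id):
DECLARE
  v_id tableB.id%TYPE;
BEGIN
  -- insert the parent row; RETURNING captures the auto-generated id
  INSERT INTO tableB (some_col)           -- some_col is a placeholder
  VALUES ('some value')
  RETURNING id INTO v_id;

  -- use the captured id as the foreign key in table A
  INSERT INTO tableA (B_ID, other_col)    -- other_col is a placeholder
  VALUES (v_id, 'other value');

  COMMIT;
END;
/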
Have you heard about the AFTER INSERT trigger? I think it is what you are looking for.
Something like this might do what you want:
CREATE OR REPLACE TRIGGER TableB_after_insert
AFTER INSERT
ON TableB
FOR EACH ROW
BEGIN
  -- :new.id already holds the auto-incremented id of the row just inserted
  -- into TableB, so no extra SELECT is needed (selecting from TableB inside
  -- this trigger would raise a mutating-table error).
  -- The other TableA columns are placeholders; adjust them to your schema.
  INSERT INTO TableA (B_ID)
  VALUES (:new.id);
END;
/
I am pulling some records into table A from table X.
Now, I want to select records which are not available in table A but are available in table B. At the same time, I don't want to select records available in both tables.
Moreover, if a column in table A is null but the same column in the record in table B has a value, I want to take that too.
Is it possible to do something like this in one statement?
This is a simple set-based operation. Relational databases are very good at that:
CREATE TABLE a (id NUMBER);
CREATE TABLE b (id NUMBER);
INSERT INTO a VALUES (1);
INSERT INTO a VALUES (3);
INSERT INTO b VALUES (2);
INSERT INTO b VALUES (3);
SELECT id FROM b
MINUS
SELECT id FROM a;
ID
2
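For the second requirement (taking the value from b when the same column in a is null), one possible shape, assuming both tables also had a val column, might be:
SELECT b.id, b.val
FROM   b
LEFT JOIN a ON a.id = b.id
WHERE  a.id IS NULL                             -- rows only in b
   OR (a.val IS NULL AND b.val IS NOT NULL);    -- a's column is empty, b's is not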
How do I add data to a table which contains attributes of two different tables that are not linked together?
Table 1 has col1,col2,col3 (assume all are numbers)
Table 2 has col4,col5,col6 (assume all are numbers)
Table 3 has A,B,C,D,E,F (assume all are numbers)
The question is: if any insertion occurs on table 1 or table 2, its data should be loaded into table 3.
I used the normal approach of:
create or replace trigger trig_name
before insert on table1,table2 --> not allowed
for each row
begin
if inserting then
insert into table3 values (:new.col1,:new.col2,:new.col3,:new.col4,:new.col5,:new.col6);
end if;
end;
/
So if I have to make two different triggers for table 1 and table 2, wouldn't that create two rows of data leaving some columns null?
Although creating a view as suggested by @EdStevens is (IMO) a superior answer, what you want can be done. And yes, it requires 2 triggers. In Oracle, and in every other DBMS that I'm aware of, a trigger can only fire on one table.
However, the triggers can be reduced somewhat. There is no need for the "if inserting" test: the trigger is declared for insert, therefore the test will always be true. The actual issue is that you cannot reference columns from table2 in the table1 trigger, and likewise the other way around. What you need to do is name the columns you are inserting, which is actually always a better approach. So:
create or replace trigger table1_bir
before insert on table1
for each row
begin
insert into table3(col1, col2, col3)
values (:new.col1,:new.col2,:new.col3);
end;
/
create or replace trigger table2_bir
before insert on table2
for each row
begin
insert into table3(col4, col5, col6)
values (:new.col4,:new.col5,:new.col6);
end;
/
What I would like to do is:
after I insert some data into table1, I would like some of that data automatically inserted into table2, such as the primary key from table1 inserted into table2 as a foreign key.
Is this done using a trigger?
Not sure where to start looking first.
Cheers
Brian
Yes, you can do that through a trigger. You can do it like this:
CREATE OR REPLACE TRIGGER my_trigger
BEFORE INSERT ON table1
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
  INSERT INTO table2 (fk_column, column1)
  VALUES (:new.pk_column_of_table1, 'value1');
END;
/
You can create a trigger as @vance said, and you can use the RETURNING INTO clause if you are populating some of the columns dynamically:
INSERT INTO t1 VALUES (t1_seq.nextval, 'FOUR')
RETURNING id INTO l_id;
have a look here
I'm looking for the best way to change the data type of a column in a populated table. Oracle only allows changing the data type of a column when all its values are null.
My solution, so far, is a PL/SQL block which stores the data of the column to be modified in a collection, alters the table, and then iterates over the collection, restoring the original data with the data type converted.
-- Before: my_table ( id NUMBER, my_value VARCHAR2(255))
-- After: my_table (id NUMBER, my_value NUMBER)
DECLARE
TYPE record_type IS RECORD ( id NUMBER, my_value VARCHAR2(255));
TYPE nested_type IS TABLE OF record_type;
foo nested_type;
BEGIN
SELECT id, my_value BULK COLLECT INTO foo FROM my_table;
UPDATE my_table SET my_value = NULL;
EXECUTE IMMEDIATE 'ALTER TABLE my_table MODIFY my_value NUMBER';
FOR i IN foo.FIRST .. foo.LAST
LOOP
UPDATE my_table
SET my_value = TO_NUMBER(foo(i).my_value)
WHERE my_table.id = foo(i).id;
END LOOP;
END;
/
I'm looking for a more elegant way to do that.
The solution is wrong. The ALTER TABLE statement does an implicit commit, so the solution has the following problems:
You cannot roll back after the ALTER TABLE statement, and if the database crashes after the ALTER TABLE statement you will lose data.
Between the SELECT and the UPDATE, users can make changes to the data.
Instead you should have a look at Oracle online redefinition (the DBMS_REDEFINITION package).
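A minimal sketch of what online redefinition could look like here (the schema name MY_SCHEMA and the interim table name are assumptions, and prerequisites such as DBMS_REDEFINITION.CAN_REDEF_TABLE are omitted):
-- interim table with the target data type (name is an assumption)
CREATE TABLE my_table_interim (id NUMBER, my_value NUMBER);

BEGIN
  -- start copying my_table into the interim table, converting the column on the way
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => 'MY_SCHEMA',
    orig_table  => 'MY_TABLE',
    int_table   => 'MY_TABLE_INTERIM',
    col_mapping => 'id id, TO_NUMBER(my_value) my_value');

  -- swap the tables; my_table now has my_value as NUMBER
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname      => 'MY_SCHEMA',
    orig_table => 'MY_TABLE',
    int_table  => 'MY_TABLE_INTERIM');
END;
/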
Your solution looks a bit dangerous to me. Loading the values into a collection and subsequently deleting them from the table means that these values are now only available in memory. If something goes wrong, they are lost.
The proper procedure is:
Add a column of the correct type to the table.
Copy the values to the new column.
Drop the old column.
Rename the new column to the old columns name.
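For the my_table example from the question, that procedure might look like this (the temporary column name my_value_new is an assumption):
-- 1. add a column of the correct type
ALTER TABLE my_table ADD (my_value_new NUMBER);
-- 2. copy the values to the new column, converting them
UPDATE my_table SET my_value_new = TO_NUMBER(my_value);
-- 3. drop the old column
ALTER TABLE my_table DROP COLUMN my_value;
-- 4. rename the new column to the old column's name
ALTER TABLE my_table RENAME COLUMN my_value_new TO my_value;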
I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works), is to iterate over all the tables in a stored procedure and do a:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers;
create global temporary table tb_suppliers$ as select * from tb_suppliers;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables and just using an insert per table:
insert into tb_customers_2
select id, name, 'new_archive_id' from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, 'new_archive_id' from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as temp tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table:
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
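A minimal PL/SQL sketch of that dynamic SQL approach (the table list, the <table>_TEMP naming convention, and the new archive_id value are assumptions):
DECLARE
  TYPE name_list IS TABLE OF VARCHAR2(30);
  v_tables         name_list := name_list('TB_CUSTOMERS', 'TB_SUPPLIERS');
  v_new_archive_id NUMBER := 42;  -- assumed value
BEGIN
  FOR i IN 1 .. v_tables.COUNT LOOP
    -- copy the current data into the pre-created temp table
    EXECUTE IMMEDIATE 'INSERT INTO ' || v_tables(i) || '_TEMP SELECT * FROM ' || v_tables(i);
    -- stamp the copy with the new archive id
    EXECUTE IMMEDIATE 'UPDATE ' || v_tables(i) || '_TEMP SET archive_id = :1'
      USING v_new_archive_id;
    -- copy the stamped rows back into the original table
    EXECUTE IMMEDIATE 'INSERT INTO ' || v_tables(i) || ' SELECT * FROM ' || v_tables(i) || '_TEMP';
    -- clear the temp table for the next run
    EXECUTE IMMEDIATE 'DELETE FROM ' || v_tables(i) || '_TEMP';
  END LOOP;
  COMMIT;
END;
/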