I need to update exactly the same columns in two tables. How do I update both through a single query? Currently I am doing it like below:
Update Table1 set col1=val1, col2=val2, col3=val3 where Id in
(select tt.cId from anotherTable at inner join thirdTable tt on at.id = tt.id)
Update Table2 set col1=val1, col2=val2, col3=val3 where Id in
(select tt.cId from anotherTable at inner join thirdTable tt on at.id = tt.id)
The only change is that the table name is different.
P.S. The above is a sample query; the original query has several lines of code. I have to minimize the text due to column size, and I cannot change the existing size of the column.
How do I perform this at once, in a single query, in Oracle? Both tables are exactly the same and the update values are also the same.
Is there anything like this:
update Table1 t1, Table2 t2 set t1.col1 = val1, t2.col1 = val1, t1.col2 = val2, t2.col2 = val2...
I don't think it is possible in Oracle to update two different target tables in a single statement. But you can do both updates within a single logical transaction:
-- Oracle starts a transaction implicitly with the first DML statement
UPDATE 1...
UPDATE 2...
COMMIT;
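If the aim is also to avoid writing the long SET/WHERE text twice, a small anonymous PL/SQL block can build it once and apply it to both tables with EXECUTE IMMEDIATE. A minimal sketch, using the placeholder names from the question (val1, val2, val3 would be literals or bind variables in practice):
DECLARE
-- the shared SET/WHERE text is written only once
v_stmt VARCHAR2(4000) :=
' SET col1 = val1, col2 = val2, col3 = val3' ||
' WHERE Id IN (SELECT tt.cId FROM anotherTable at' ||
' INNER JOIN thirdTable tt ON at.id = tt.id)';
BEGIN
EXECUTE IMMEDIATE 'UPDATE Table1' || v_stmt;
EXECUTE IMMEDIATE 'UPDATE Table2' || v_stmt;
COMMIT;
END;
/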
We cannot update two tables simultaneously, so in this case we can use PL/SQL. A VARCHAR2 variable can hold the value to be updated, since in PL/SQL the VARCHAR2 limit is 32,767 bytes. Hold the updated value in a variable and use that variable to update both tables.
Sample code:
desc temp;
Name Null? Type
----------- ----- --------------
ID NUMBER
MDATE DATE
ITEM CHAR(3)
DESCRIPTION VARCHAR2(4000)
set SERVEROUTPUT ON
DECLARE
v_text_update VARCHAR2(32767);
v_query1 VARCHAR2(32767);
v_query2 VARCHAR2(32767);
v_query3 VARCHAR2(32767);
BEGIN
v_query1 := 'update temp set description=' || chr(39);
-- long filler value for testing; rpad keeps the example readable
v_text_update := rpad('a', 4000, 'a');
v_query2 := chr(39) || ' where id=11';
v_query3 := v_query1
|| v_text_update
|| v_query2;
dbms_output.put_line(v_query3);
EXECUTE IMMEDIATE v_query3;
-- OR, instead of storing the value in v_query3, you can concatenate directly:
-- EXECUTE IMMEDIATE v_query1 || v_text_update || v_query2;
END;
/
I hope it helps; let me know if I missed something.
Related
So I usually get the primary key of a newly inserted record as follows when using a trigger:
insert into table1 (pk1, notes) values (null, 'Tester') returning pk1
into v_item;
I am trying to use the same concept but with an insert using a select statement. So for example:
insert into table1 (pk1, notes) select null, description from table2 where pk2 = 2 returning pk1
into v_item;
Note:
1. There is a trigger on table1 which automatically creates a pk1 on insert.
2. I need to use a select insert because of the size of the table that is being inserted into.
3. The insert is basically a copy of the record, so there is only 1 record being inserted at a time.
Let me know if I can provide more information.
I don't believe you can do this with insert/select directly. However, you can do it with PL/SQL and FORALL. Given the constraint about the table size, you'll have to balance memory usage with performance using l_limit. Here's an example...
Given this table with 100 rows:
create table t (
c number generated by default as identity,
c2 number
);
insert into t (c2)
select rownum
from dual
connect by rownum <= 100;
You can do this:
declare
cursor t_cur
is
select c2
from t;
type t_ntt is table of number;
l_c2_vals_in t_ntt;
l_c_vals_out t_ntt;
l_limit number := 10;
begin
open t_cur;
loop
fetch t_cur bulk collect into l_c2_vals_in limit l_limit;
forall i in indices of l_c2_vals_in
insert into t (c2) values (l_c2_vals_in(i))
returning c bulk collect into l_c_vals_out;
-- You have access to the new ids here
dbms_output.put_line(l_c_vals_out.count);
exit when l_c2_vals_in.count < l_limit;
end loop;
close t_cur;
end;
You can't use that mechanism; as the documentation railroad diagram shows, the returning clause is only allowed with the values version, not with the subquery version.
I'm interpreting your second restriction (about 'table size') as being about the number of columns you would have to handle, possibly as individual variables, rather than about the number of rows - I don't see how that would be relevant here. There are ways to avoid having lots of per-column local variables though; you could select into a row-type variable first:
declare
v_item number;
v_row table1%rowtype;
begin
...
select null, description
into v_row
from table2 where pk2 = 2;
insert into table1 values v_row returning pk1 into v_item;
dbms_output.put_line(v_item);
...
or with a loop, which might make things look more complicated than necessary if you really only ever have a single row:
declare
v_item number;
begin
...
for r in (
select description
from table2 where pk2 = 2
)
loop
insert into table1 (notes) values (r.description) returning pk1 into v_item;
dbms_output.put_line(v_item);
...
end loop;
...
or with a collection... as #Dan has posted while I was answering this so I won't repeat! - though again that might be overkill or overly complicated for a single row.
In my PL/SQL script, how do I declare JUSTIFIC_REC when it represents a join?
SELECT *
INTO JUSTIFIC_REC
FROM TABLE1 A
INNER JOIN TABLE2 B
ON A.ID_JUSTIFIC = B.ID_JUSTIFIC ;
All I want is to insert a concatenated row into TABLE3:
INSERT INTO MOF_OUTACCDTL_REQ VALUES(
JUSTIFIC_rec.ENTRY_COMMENTS || ' ' || JUSTIFIC_rec.DESCRIPTION );
What should the declaration of JUSTIFIC_REC look like at the beginning of my script?
If it wasn't for the INNER JOIN, I would write something like:
JUSTIFIC_rec TABLE1%ROWTYPE;
If I understood correctly, you can try a cursor %ROWTYPE like this (not sure if that is what you meant by declaring your variable type for the select with joins):
set serveroutput on;
declare
cursor cur is
SELECT ENTRY_COMMENTS, DESCRIPTION
FROM TABLE1 A
INNER JOIN TABLE2 B
ON A.ID_JUSTIFIC = B.ID_JUSTIFIC ;
justific_rec cur%ROWTYPE;
begin
open cur;
loop
fetch cur into justific_rec;
exit when cur%notfound;
dbms_output.put_line(justific_rec.entry_comments || ' ' || justific_rec.description);
end loop;
close cur;
end;
The answer to your question is in the question itself: you have to use the %ROWTYPE attribute.
The rowtype attribute can be any of the types below:
rowtype_attribute :=
{cursor_name | cursor_variable_name | table_name} % ROWTYPE
cursor_name:-
An explicit cursor previously declared within the current scope.
cursor_variable_name:-
A PL/SQL strongly typed cursor variable, previously declared within the current scope.
table_name:-
A database table or view that must be accessible when the declaration is elaborated.
So the code will look like:
DECLARE
CURSOR c1 IS
SELECT * FROM TABLE1 A
INNER JOIN TABLE2 B
ON A.ID_JUSTIFIC = B.ID_JUSTIFIC ;
justific_rec c1%ROWTYPE;
BEGIN
open c1;
loop
fetch c1 into justific_rec;
exit when c1%notfound;
INSERT INTO MOF_OUTACCDTL_REQ VALUES(
JUSTIFIC_rec.ENTRY_COMMENTS || ' ' || JUSTIFIC_rec.DESCRIPTION );
end loop;
close c1;
END;
/
You need to use BULK COLLECT INTO
SELECT *
BULK COLLECT INTO JUSTIFIC_REC
FROM TABLE1 A
INNER JOIN TABLE2 B
ON A.ID_JUSTIFIC = B.ID_JUSTIFIC ;
JUSTIFIC_REC needs to be a collection (TABLE) type of a record with the appropriate columns, for example as sketched below.
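One way to declare such a collection is off a cursor's %ROWTYPE. A minimal sketch, assuming ENTRY_COMMENTS lives on TABLE1 and DESCRIPTION on TABLE2 (adjust to your schema); selecting the two needed columns rather than * keeps the record type simple:
DECLARE
CURSOR c_justific IS
SELECT a.entry_comments, b.description
FROM table1 a
INNER JOIN table2 b ON a.id_justific = b.id_justific;
TYPE t_justific_tab IS TABLE OF c_justific%ROWTYPE;
justific_tab t_justific_tab;
BEGIN
OPEN c_justific;
FETCH c_justific BULK COLLECT INTO justific_tab;
CLOSE c_justific;
FOR i IN 1 .. justific_tab.COUNT LOOP
INSERT INTO mof_outaccdtl_req
VALUES (justific_tab(i).entry_comments || ' ' || justific_tab(i).description);
END LOOP;
END;
/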
If all you're wanting to do is insert into a table based on a select statement, then there's no need for PL/SQL (by which I mean there's no need to open a cursor, fetch a row, process the row and then move on to the next row) - that's row-by-row aka slow-by-slow processing.
Instead, you can do this all in a single insert statement, e.g.:
INSERT INTO mof_outaccdtl_req (<column being inserted into>) -- please always list the columns being inserted into; this avoids issues with your code when someone adds a column to the table.
SELECT entry_comments || ' ' || description -- please alias your columns! How do we know which table each column came from?
FROM table1 a
inner join table2 b on a.id_justific = b.id_justific;
If you wanted to embed this insert statement in PL/SQL, all you need to do is add a begin/end around it, like so:
begin
INSERT INTO mof_outaccdtl_req (<column being inserted into>) -- please always list the columns being inserted into; this avoids issues with your code when someone adds a column to the table.
SELECT entry_comments || ' ' || description -- please alias your columns! How do we know which table each column came from?
FROM table1 a
inner join table2 b on a.id_justific = b.id_justific;
end;
/
This is the preferred solution when working with databases - think in sets (where possible) not procedurally (aka row-by-row processing). It is easier to maintain, the code is simpler to read and write, and it'll be more performant since you're not having to switch between PL/SQL and SQL several times with each row.
Context switching is bad for performance. Think of a bath full of water: is it quicker to empty the bath with a spoon (row-by-row processing), with a jug (batch-by-batch processing), or by pulling the plug (set-based)?
The database schemas (source and target) are very large (each has over 350 tables). I have been given the task of somehow merging these two schemas into one. The data itself (what is in the tables) has to be migrated. I have to be careful that there are no duplicate primary keys before or while merging the schemas. Has anybody done this already and could provide their solution, or could anyone help me find an approach to the task? My approaches all failed and my advisor just tells me to get help online :/
As for my approach:
I have tried using the all_constraints view to get all PKs from my DB.
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
FROM all_constraints cons, all_cons_columns cols
WHERE cols.owner = 'DB'
AND cons.constraint_type = 'P'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
I also "know" that there has to be a sequence for the primary keys to add values to it:
CREATE SEQUENCE seq_pk_addition
MINVALUE 1
MAXVALUE 99999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
Because I am a noob when it comes to PL/SQL (or SQL in general): so how/what should I do next? :/
Here is a link for an ERD of the database: https://ufile.io/9tdoj
virus scan: https://www.virustotal.com/#/file/dbe5f418115e50313a2268fb33a924cc8cb57a43bc85b3bbf5f6a571b184627e/detection
As promised in my comment, I have prepared dynamic code which you can try in order to merge the data between the source and target tables. The logic is as follows:
Step 1: Get all the table names from the SOURCE schema. In the query below you need to replace the schema (owner) name accordingly. For testing purposes I took only one table, so when you run it, remove the table-name filter.
Step 2: Get the constrained column names for the table. These are used to prepare the ON clause that is later used in the MERGE statement.
Step 3: Get the non-constrained column names for the table. These are used in the UPDATE clause of the MERGE.
Step 4: Prepare the insert list for when the data does not match the ON condition of the MERGE statement.
Read my inline comments to understand each step.
CREATE OR REPLACE PROCEDURE COPY_TABLE
AS
Type OBJ_NME is table of varchar2(100) index by pls_integer;
--To hold Table name
v_obj_nm OBJ_NME ;
--To hold Columns of table
v_col_nm OBJ_NME;
v_othr_col_nm OBJ_NME;
on_clause VARCHAR2(2000);
upd_clause VARCHAR2(4000);
cntr number:=0;
v_sql VARCHAR2(4000);
col_list1 VARCHAR2(4000);
col_list2 VARCHAR2(4000);
col_list3 VARCHAR2(4000);
col_list4 varchar2(4000);
col_list5 VARCHAR2(4000);
col_list6 VARCHAR2(4000);
col_list7 VARCHAR2(4000);
col_list8 varchar2(4000);
BEGIN
--Get Source table names
SELECT OBJECT_NAME
BULK COLLECT INTO v_obj_nm
FROM all_objects
WHERE owner LIKE 'RU%' -- Replace `RU%` with your Source schema name here
AND object_type = 'TABLE'
and object_name ='TEST'; --remove this condition if you want this to run for all tables
FOR I IN 1..v_obj_nm.count
loop
--Columns with Constraints
SELECT column_name
bulk collect into v_col_nm
FROM user_cons_columns
WHERE table_name = v_obj_nm(i);
--Columns without Constraints remain columns of table
SELECT *
BULK COLLECT INTO v_othr_col_nm
from (
SELECT column_name
FROM user_tab_cols
WHERE table_name = v_obj_nm(i)
MINUS
SELECT column_name
FROM user_cons_columns
WHERE table_name = v_obj_nm(i));
--Prepare UPDATE clause from the non-constrained columns
--(MERGE cannot update columns that appear in the ON clause)
FOR l IN 1..v_othr_col_nm.count
loop
cntr:=cntr+1;
--assignments in an UPDATE SET list are separated by commas
upd_clause := upd_clause||'t1.'||v_othr_col_nm(l)||' = t2.'||v_othr_col_nm(l);
col_list2 := col_list2||'t1.'||v_othr_col_nm(l);
col_list6 := col_list6||'t2.'||v_othr_col_nm(l);
IF (cntr < v_othr_col_nm.count)
THEN
upd_clause := upd_clause||', ';
col_list2 := col_list2||',';
col_list6 := col_list6||',';
END IF;
dbms_output.put_line(col_list2||col_list6);
--dbms_output.put_line(upd_clause);
End loop;
--Update clause ends
cntr:=0; --Counter reset
--Prepare ON clause from the constrained columns
FOR k IN 1..v_col_nm.count
loop
cntr:=cntr+1;
--dbms_output.put_line(v_col_nm.count || cntr);
on_clause := on_clause||'t1.'||v_col_nm(k)||' = t2.'||v_col_nm(k);
col_list4 := col_list4||'t1.'||v_col_nm(k);
col_list8 := col_list8||'t2.'||v_col_nm(k);
IF (cntr < v_col_nm.count)
THEN
on_clause := on_clause||' and ';
col_list4 := col_list4||',';
col_list8 := col_list8||',';
end if;
dbms_output.put_line(col_list4||col_list8);
end loop;
-- ON clause ends
cntr:=0; --Counter reset
--Prepare and run the MERGE statement once per table
--put the target schema name before the first v_obj_nm(i) and the source schema name before the second one
v_sql:= 'MERGE INTO '|| v_obj_nm(i)||' t1
USING (SELECT * FROM '|| v_obj_nm(i)||') t2
ON ('||on_clause||')
WHEN MATCHED THEN
UPDATE
SET '||upd_clause ||
' WHEN NOT MATCHED THEN
INSERT
('||col_list2||','
||col_list4||
')
VALUES
('||col_list6||','
||col_list8||
')';
dbms_output.put_line(v_sql);
execute immediate v_sql;
--Reset the accumulated clauses before processing the next table
on_clause := NULL;
upd_clause := NULL;
col_list2 := NULL;
col_list4 := NULL;
col_list6 := NULL;
col_list8 := NULL;
End loop;
END;
/
Execution:
exec COPY_TABLE
Output:
anonymous block completed
PS: I have tested this with a table with 2 columns, one of which has a unique key constraint. The DDL of the table is as below:
CREATE TABLE TEST
( COL2 NUMBER,
COLUMN1 VARCHAR2(20 BYTE),
CONSTRAINT TEST_UK1 UNIQUE (COLUMN1)
) ;
In the end, I hope you can understand my code (you being a noob) and can implement something similar if the above fails for your requirement.
Oh dear! Normally, such a question would be quickly closed as "too broad", but we need to support victims of evil advisors!
As for the effort: I would estimate a week full-time for an experienced expert, plus two days of quality checking for an experienced QA engineer.
First of all, there is no way that such a complex data merge will work on the first try. That means that you'll need test copies of both schemas that can be easily rebuilt. And you'll need a place to try it out. Normally this is done with an export of both schemas and an empty dev database.
Next, you need both schemas close enough to be able to compare the data. I'd do it with an import of the export files mentioned above. If the schema names are identical, then rename one during import.
Next, I'd double-check that the structure is really identical, with queries like:
SELECT a.owner, a.table_name, b.owner, b.table_name
FROM all_tables a
FULL JOIN all_tables b
ON a.table_name = b.table_name
AND a.owner = 'SCHEMAA'
AND b.owner = 'SCHEMAB'
WHERE a.owner IS NULL or b.owner IS NULL;
Next, I'd check if the primary and unique keys have overlaps:
SELECT id FROM schemaa.table1
INTERSECT
SELECT id FROM schemab.table1;
As there are 300+ tables, I'd generate those queries:
DECLARE
stmt VARCHAR2(30000);
n NUMBER;
schema_a CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAA';
schema_b CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAB';
BEGIN
FOR c IN (SELECT owner, constraint_name, table_name,
(SELECT LISTAGG(column_name,',') WITHIN GROUP (ORDER BY position)
FROM all_cons_columns c
WHERE s.owner = c.owner
AND s.constraint_name = c.constraint_name) AS cols
FROM all_constraints s
WHERE s.constraint_type IN ('P')
AND s.owner = schema_a)
LOOP
dbms_output.put_line('Checking pk '||c.constraint_name||' on table '||c.table_name);
stmt := 'SELECT count(*) FROM '||schema_a||'.'||c.table_name
||' JOIN '||schema_b||'.'||c.table_name
|| ' USING ('||c.cols||')';
--dbms_output.put_line('Query '||stmt);
EXECUTE IMMEDIATE stmt INTO n;
dbms_output.put_line('Found '||n||' overlapping primary keys in table '||c.table_name);
END LOOP;
END;
/
First of all, for 350 tables you will most probably need dynamic SQL.
Declare a CURSOR or a COLLECTION (a table of VARCHAR2) with all the table names.
Declare a string variable to build the dynamic SQL.
Loop through the entire list of table names and, for each table, generate a string which will be executed as SQL with the EXECUTE IMMEDIATE command.
The dynamic SQL that is built should insert the values from the source table into the target table. If the PK already exists in the target table, check the field that represents the last-updated date; if it is more recent in the source table than in the target table, then perform an update in the target table, else do nothing. A rough sketch of this idea is below.
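A rough sketch of that idea, for a single hypothetical table with an ID primary key, a LAST_UPDATED date column and one data column VAL; the schema names, table name, and columns are placeholders, and the real lists would come from the data dictionary as in the other answer:
DECLARE
v_sql VARCHAR2(4000);
BEGIN
-- in practice the table names would be fetched from all_tables or a collection
FOR t IN (SELECT 'TABLE1' AS table_name FROM dual) LOOP
v_sql :=
'MERGE INTO target_schema.' || t.table_name || ' t ' ||
'USING source_schema.' || t.table_name || ' s ' ||
'ON (t.id = s.id) ' ||
'WHEN MATCHED THEN UPDATE SET t.val = s.val, t.last_updated = s.last_updated ' ||
'WHERE s.last_updated > t.last_updated ' ||
'WHEN NOT MATCHED THEN INSERT (id, val, last_updated) ' ||
'VALUES (s.id, s.val, s.last_updated)';
EXECUTE IMMEDIATE v_sql;
END LOOP;
COMMIT;
END;
/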
I have to move the data from table A to table B (they have almost the same fields).
What I have now is a cursor that iterates over the records that have to be moved, inserts one record into the destination table, and updates the is_processed field in the source table.
Something like:
BEGIN
FOR i IN (SELECT *
FROM A
WHERE A.IS_PROCESSED = 'N')
LOOP
INSERT INTO B(...) VALUES(i....);
UPDATE A SET IS_PROCESSED = 'Y' WHERE A.ID = i.ID;
COMMIT;
END LOOP;
END;
The question is: how do I do the same using INSERT ... SELECT (without the loop) and then update IS_PROCESSED for all the moved rows?
There is no BULK COLLECT INTO for INSERT .. SELECT
Maybe you can try this; I don't think it's better than your LOOP, though.
DECLARE
TYPE l_src_tp IS TABLE OF t_source%ROWTYPE;
l_src_rows l_src_tp;
BEGIN
SELECT *
BULK COLLECT INTO l_src_rows
FROM t_source;
FORALL c IN l_src_rows.first .. l_src_rows.last
INSERT INTO t_dest (td_id, td_value)
VALUES (l_src_rows(c).ts_id, l_src_rows(c).ts_value);
FORALL c IN l_src_rows.first .. l_src_rows.last
UPDATE t_source
SET ts_is_proccesed = 'Y'
WHERE ts_id = l_src_rows(c).ts_id;
END;
If you reverse the order and do the update first and then the insert, you can use:
DECLARE
ids sys.odcinumberlist;
BEGIN
UPDATE a SET is_processed = 'Y' WHERE is_processed = 'N' RETURNING id BULK COLLECT INTO ids;
INSERT INTO b (id) SELECT column_value id FROM TABLE(ids);
COMMIT;
END;
In the SELECT you can join the ids collection back to other tables to get any other data you want to insert into b, as sketched below.
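For example, a minimal sketch assuming table a also has a payload column that needs to be copied into b (payload is a placeholder name):
DECLARE
ids sys.odcinumberlist;
BEGIN
UPDATE a
SET is_processed = 'Y'
WHERE is_processed = 'N'
RETURNING id BULK COLLECT INTO ids;
-- join the collected ids back to the source table to pick up the other columns
INSERT INTO b (id, payload)
SELECT src.id, src.payload
FROM a src
JOIN TABLE(ids) t ON t.column_value = src.id;
COMMIT;
END;
/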
Hello, I would prefer pure SQL rather than PL/SQL here. I don't see why another update statement is required for this simple task. Let me know if this helps.
INSERT INTO <new table>
SELECT col1,col2,
.....,'Y'
FROM <old_table>
WHERE processed_in = 'N';
Imagine I have the following queries in a package:
SELECT col1 FROM table WHERE id=5;
SELECT otherCol FROM otherTable WHERE id=78;
I want to return one record with two columns containing the values 'col1' and 'otherCol'.
In my case, I'd have lots of subqueries and DECODE statements, so for readability I want to split it up into different queries, something like:
var result = {}; -- Unfortunately PL/SQL doesn't cope very well with JavaScript.
SELECT col1 INTO someVar FROM table WHERE id=5;
IF someVar = 1 THEN
result.X = SomeQuery();
ELSE
result.X = SomeEntirelyDifferentQuery();
END IF;
SELECT col INTO result.Z FROM tab1;
SELECT coz INTO result.Q FROM tab2;
IF result.Z IS NULL THEN
result.B = Query1();
ELSE
result.B = Query2();
END IF;
... and so on.
RETURN result;
So basically I want to create an "empty record" and then, based on some conditions, assign different values to columns in that record.
This could be solved with DECODE, but the resulting query would be neither very readable nor very performant.
You have to define an object type that the function can return.
CREATE OR REPLACE TYPE fancy_function_result_row as object
(
X number,
Y number,
Q date,
Z varchar2(30)
);
create or replace function fancy_function(...)
return fancy_function_result_row
as
my_record fancy_function_result_row := fancy_function_result_row(null, null, null, null);
begin
my_record.X := 1;
-- etc...
return my_record;
end;
If you want to insert the record into a table, it might be a lot easier to simply define a %ROWTYPE variable for that table.
declare
row_my_table my_table%rowtype;
begin
row_my_table.X := etc..
insert into my_table values row_my_table;
end;
--To concatenate the values of col1 and otherCol:
with r1 as (select col1 from table where id=5),
r2 as (select otherCol from otherTable where id=78)
select r1.col1 || ' concat with ' || r2.othercol from r1, r2
--To do this condition using DECODE:
SELECT DECODE (col1,
1,
(SELECT 1 FROM query1),
(SELECT 1 FROM DifferentQuery)
)
FROM TABLE