PL/SQL Extract Column Names and use in select statement - oracle

Not sure if this is possible at all, but I'm trying to do this with as little manual work as possible.
I have a table with 150 columns based on different combinations of factors.
I wish to extract the column names where a certain string appears inside the column name.
I have done the following, which does this. Here is a basic example of what I have:
--Create the table
Create Table temp
(id number,
Fac1_Fac2_Fac_3_Fac4_Fac5 number,
Fac1_Fac6_Fac_3_Fac4_Fac5 number,
Fac1_Fac6_Fac_7_Fac4_Fac5 number,
Fac1_Fac9_Fac_3_Fac4_Fac5 number,
Fac1_Fac10_Fac_3_Fac4_Fac5 number,
Fac1_Fac2_Fac_3_Fac11_Fac5 number,
Fac1_Fac2_Fac_3_Fac4_Fac12 number,
Fac13_Fac2_Fac_3_Fac4_Fac5 number);
Insert into temp Values (1,35634,3243,343,564,56,4635,3,334);
Insert into temp Values (2,3434234,3243,343,564,56,435,3,34234);
Insert into temp Values (3,5555,3243,33,564,56,435,3,3434);
Insert into temp Values (4,34234,343,343,564,56,4335,3,34);
commit;
--Extract Column Names
Select * from (
Select COLUMN_NAME
from user_tab_cols
where lower(table_name) ='temp'
)
where column_name like '%FAC13%'
--This is what I want to automate.
Select id, FAC13_FAC2_FAC_3_FAC4_FAC5
From temp
--I want the column name to come from the select statement above as there may be lots of names.
Basically, I want to select all the rows from my table, but only the columns that have Fac13 in the name, all in one query if possible.
Thanks

I do not think you can do that in a single static query. First, your column-name extraction query can be used as a cursor, and then you can build and run a dynamic select statement, as follows:
CREATE OR REPLACE PROCEDURE proc_dyn_select IS
CURSOR c1 IS
SELECT column_name
FROM user_tab_cols
WHERE LOWER(table_name) ='temp' and column_name LIKE '%FAC13%';
cols c1%ROWTYPE;
sqlstmt VARCHAR2(2000);
BEGIN
OPEN c1;
LOOP
FETCH c1 into cols;
EXIT WHEN c1%NOTFOUND;
sqlstmt := sqlstmt ||cols.column_name||',';
END LOOP;
CLOSE c1;
sqlstmt := 'select '||substr(sqlstmt, 1, length(sqlstmt)-1)||' FROM temp';
EXECUTE IMMEDIATE sqlstmt;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('error '||sqlerrm);
END;
/
Explanation
First, the cursor selects the columns that meet your conditions (they belong to the table temp and their names contain the substring FAC13). Then, in the execution section (after BEGIN), you build your query dynamically using the column names fetched from cursor c1. With each round of the loop, a column name is appended to the string followed by a comma, so a list of columns is built up like this: 'col1,col2,col3,...,coln,'. The string is stored in the sqlstmt variable.
After the loop ends, you amend the string to build the SQL statement by adding the SELECT and FROM keywords and the table name; the last character of sqlstmt is removed first, as it is an extra trailing comma.
EXECUTE IMMEDIATE will then run the query stored in sqlstmt. Note that for a SELECT with no INTO clause the result set is simply discarded, so to actually see the rows you would fetch them into variables or return a ref cursor.
By using a procedure, you can also pass parameters, so the same procedure can build and run whatever dynamic SQL statement you need.
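As a minimal sketch of that idea (not part of the original answer), the column list can also be built with LISTAGG and the statement returned as a ref cursor so the caller can fetch the rows; the function name get_fac_cols and its filter parameter are only illustrative:
-- Build the matching column list from the dictionary and hand the
-- generated SELECT back as a ref cursor (LISTAGG needs 11gR2 or later).
CREATE OR REPLACE FUNCTION get_fac_cols(p_filter IN VARCHAR2)
  RETURN SYS_REFCURSOR
IS
  v_cols VARCHAR2(4000);
  v_rc   SYS_REFCURSOR;
BEGIN
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO v_cols
    FROM user_tab_cols
   WHERE table_name = 'TEMP'
     AND column_name LIKE '%' || UPPER(p_filter) || '%';
  OPEN v_rc FOR 'SELECT id, ' || v_cols || ' FROM temp';
  RETURN v_rc;
END;
/
-- Example call from SQL*Plus / SQL Developer:
-- VARIABLE rc REFCURSOR
-- EXEC :rc := get_fac_cols('FAC13')
-- PRINT rc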

Related

how can I dynamically select a column from a list of tables and output timestamp results?

I have a large number of identical tables containing columns "eventtime" and "eventvalue"
eventtime is TIMESTAMP(6) and eventvalue is NUMBER (but I don't care about it.)
Table names are like data_001, data_002, data_003, etc., all identically defined, but it doesn't matter because I have another table called table_list which contains a list of these tables by name. (They are dynamically generated.)
So "select distinct data_table from table_list" gets you output like
data_001
data_002
data_003
and so on.
I am trying to extract the minimum timestamp from the eventtime column of each table and output it to DBMS_OUTPUT and I can't get it to work.
DECLARE
mystr VARCHAR(1000);
v_mystamp TIMESTAMP(6);
cursor c1 is
select distinct data_table from table_list order by data_table asc;
BEGIN
FOR rec IN c1 LOOP
mystr := 'select min(eventtime) into :v_mystamp from ' || rec.data_table;
execute immediate mystr;
DBMS_OUTPUT.PUT_LINE(rec.data_table);
DBMS_OUTPUT.PUT_LINE(v_mystamp);
END LOOP;
END;
The expected output is a list of tables, and the min(eventtime) from each table.
What I am getting is a list of tables, and then a blank line after each, and I am wondering what I am doing wrong... somehow it is not capturing the min(eventtime) properly (or not outputting it properly?) but I am not sure why.
The INTO written inside the query string never populates the PL/SQL variable; move the INTO clause out of the string and onto the EXECUTE IMMEDIATE statement itself:
mystr := 'select min(eventtime) from ' || rec.data_table;
execute immediate mystr into v_mystamp;
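Putting that fix back into the block from the question, a corrected sketch looks like this:
DECLARE
  mystr     VARCHAR2(1000);
  v_mystamp TIMESTAMP(6);
  CURSOR c1 IS
    SELECT DISTINCT data_table FROM table_list ORDER BY data_table ASC;
BEGIN
  FOR rec IN c1 LOOP
    -- no INTO inside the string; bind the result on EXECUTE IMMEDIATE itself
    mystr := 'select min(eventtime) from ' || rec.data_table;
    EXECUTE IMMEDIATE mystr INTO v_mystamp;
    DBMS_OUTPUT.PUT_LINE(rec.data_table || ': ' || v_mystamp);
  END LOOP;
END;
/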

INSERT INTO not working in PL/SQL in Oracle

declare
vquery long;
cursor c1 is
select * from temp_name;
begin
for i in c1
loop
vquery :='INSERT INTO ot.temp_new(id)
select '''||i.id||''' from ot.customers';
dbms_output.put_line(i.id);
end loop;
end;
/
Output of select * from temp_name is :
ID
--------------------------------------------------------------------------------
customer_id
1 row selected.
I have a customers table which has a customer_id column. I want to insert all the customer_id values into the temp_new table, but nothing is being inserted. The PL/SQL block executes successfully but the temp_new table is empty.
The output of dbms_output.put_line(i.id); is
customer_id
What is wrong there?
The main problem is that you generate a dynamic statement that you never execute; at some point you need to do:
execute immediate vquery;
But there are other problems. If you output the generated vquery string you'll see it contains:
INSERT INTO ot.temp_new(id)
select 'customer_id' from ot.customers
which means that for every row in customers you'll get one row in temp_new with ID set to the same fixed literal 'customer_id'. It's unlikely that's what you want; if customer_id is a column name from customers then it shouldn't be in single quotes.
As #mathguy suggested, long is not a sensible data type to use; you could use a CLOB but only really need a varchar2 here. So something more like this, where I've also switched to use an implicit cursor:
declare
l_stmt varchar2(4000);
begin
for i in (select id from temp_name)
loop
l_stmt := 'INSERT INTO temp_new(id) select '||i.id||' from customers';
dbms_output.put_line(i.id);
dbms_output.put_line(l_stmt);
execute immediate l_stmt;
end loop;
end;
/
db<>fiddle
The loop doesn't really make sense though; if your temp_name table had multiple rows with different column names, you'd try to insert the corresponding values from those columns in the customers table into multiple rows in temp_new, all in the same id column, as shown in this db<>fiddle.
I guess this is the starting point for something more complicated, but still seems a little odd.
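For what it's worth, if temp_name really only ever holds the literal column name customer_id and the goal is just to copy those values across, the loop and the dynamic SQL can be dropped entirely; a plain statement (a sketch, assuming those table and column names) would be:
INSERT INTO temp_new (id)
SELECT customer_id
FROM customers;
COMMIT;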

What is the solution for the errors in this PL/SQL code? [duplicate]

The database schemas (source and target) are very large (each has over 350 tables). I have got the task to somehow merge these two schemas into one. The data itself (what's in the tables) has to be migrated. I have to be careful that there are no duplicate entries for primary keys before or while merging the schemata. Has anybody ever done that already and would be able to provide their solution, or could anyone help me find an approach to the task? My approaches all failed and my advisor just tells me to get help online :/
As to my approach:
I have tried using the all_constraints view to get all primary keys from my DB.
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
FROM all_constraints cons, all_cons_columns cols
WHERE cols.owner = 'DB'
AND cons.constraint_type = 'P'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
I also "know" that there has to be a sequence for the primary keys to add values to it:
CREATE SEQUENCE seq_pk_addition
MINVALUE 1
MAXVALUE 99999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
Because I am a noob when it comes to PL/SQL (or SQL in general): how/what should I do next? :/
Here is a link for an ERD of the database: https://ufile.io/9tdoj
virus scan: https://www.virustotal.com/#/file/dbe5f418115e50313a2268fb33a924cc8cb57a43bc85b3bbf5f6a571b184627e/detection
As promised in my comment, I have prepared dynamic code which you can try in order to merge the data from the source tables into the target tables. The logic is as below:
Step 1: Get all the table names from the SOURCE schema. In the query below you need to replace the schema (owner) name accordingly. For testing purposes I took only one table, so when you run it, remove the table-name filtering clause.
Step 2: Get the constrained column names for the table. These are used to prepare the ON clause that is later used in the MERGE statement.
Step 3: Get the non-constrained column names for the table. These are used in the UPDATE clause of the MERGE.
Step 4: Prepare the insert list for when the data does not match the ON condition of the MERGE statement.
Read my inline comments to understand each step.
CREATE OR REPLACE PROCEDURE COPY_TABLE
AS
Type OBJ_NME is table of varchar2(100) index by pls_integer;
--To hold Table name
v_obj_nm OBJ_NME ;
--To hold Columns of table
v_col_nm OBJ_NME;
v_othr_col_nm OBJ_NME;
on_clause VARCHAR2(2000);
upd_clause VARCHAR2(4000);
cntr number:=0;
v_sql VARCHAR2(4000);
col_list1 VARCHAR2(4000);
col_list2 VARCHAR2(4000);
col_list3 VARCHAR2(4000);
col_list4 varchar2(4000);
col_list5 VARCHAR2(4000);
col_list6 VARCHAR2(4000);
col_list7 VARCHAR2(4000);
col_list8 varchar2(4000);
BEGIN
--Get Source table names
SELECT OBJECT_NAME
BULK COLLECT INTO v_obj_nm
FROM all_objects
WHERE owner LIKE 'RU%' -- Replace `RU%` with your Source schema name here
AND object_type = 'TABLE'
and object_name ='TEST'; --remove this condition if you want this to run for all tables
FOR I IN 1..v_obj_nm.count
loop
--Columns with Constraints
SELECT column_name
bulk collect into v_col_nm
FROM user_cons_columns
WHERE table_name = v_obj_nm(i);
--Columns without Constraints remain columns of table
SELECT *
BULK COLLECT INTO v_othr_col_nm
from (
SELECT column_name
FROM user_tab_cols
WHERE table_name = v_obj_nm(i)
MINUS
SELECT column_name
FROM user_cons_columns
WHERE table_name = v_obj_nm(i));
--Prepare Update Clause
FOR l IN 1..v_othr_col_nm.count
loop
cntr:=cntr+1;
--Accumulate the assignments (do not overwrite); SET items are separated by commas
upd_clause := upd_clause||'t1.'||v_othr_col_nm(l)||' = t2.'||v_othr_col_nm(l)||', ';
col_list1:= 't1.'||v_othr_col_nm(l) ||',';
col_list2:= col_list2||col_list1;
col_list5:= 't2.'||v_othr_col_nm(l) ||',';
col_list6:= col_list6||col_list5;
IF (cntr = v_othr_col_nm.count)
THEN
--dbms_output.put_line('YES');
upd_clause:=rtrim(upd_clause,', ');
col_list2:=rtrim( col_list2,',');
col_list6:=rtrim( col_list6,',');
END IF;
dbms_output.put_line(col_list2||col_list6);
--dbms_output.put_line(upd_clause);
End loop;
--Update caluse ends
cntr:=0; --Counter reset
--Prepare ON clause
FOR k IN 1..v_col_nm.count
loop
cntr:=cntr+1;
--dbms_output.put_line(v_col_nm.count || cntr);
--Accumulate the join conditions for the ON clause
on_clause := on_clause||'t1.'||v_col_nm(k)||' = t2.'||v_col_nm(k)||' and ';
col_list3:= 't1.'||v_col_nm(k) ||',';
col_list4:= col_list4||col_list3;
col_list7:= 't2.'||v_col_nm(k) ||',';
col_list8:= col_list8||col_list7;
IF (cntr = v_col_nm.count)
THEN
--dbms_output.put_line('YES');
on_clause:=rtrim(on_clause,' and');
col_list4:=rtrim( col_list4,',');
col_list8:=rtrim( col_list8,',');
end if;
dbms_output.put_line(col_list4||col_list8);
End loop;
-- ON clause ends
--Prepare merge Statement (built and executed once per table, after both column loops)
v_sql:= 'MERGE INTO '|| v_obj_nm(i)||' t1--put target schema name before v_obj_nm
USING (SELECT * FROM '|| v_obj_nm(i)||') t2-- put source schema name befire v_obj_nm here
ON ('||on_clause||')
WHEN MATCHED THEN
UPDATE
SET '||upd_clause ||
' WHEN NOT MATCHED THEN
INSERT
('||col_list2||','
||col_list4||
')
VALUES
('||col_list6||','
||col_list8||
')';
dbms_output.put_line(v_sql);
execute immediate v_sql;
--Reset the accumulators before processing the next table
cntr       := 0;
on_clause  := NULL;
upd_clause := NULL;
col_list2  := NULL;
col_list4  := NULL;
col_list6  := NULL;
col_list8  := NULL;
End loop;
END;
/
Execution:
exec COPY_TABLE
Output:
anonymous block completed
PS: I hope you can follow my code (you being a noob) and implement something similar if the above fails for your requirement. I have tested this with a table with two columns, one of which has a unique key constraint; the DDL of that table is as below:
CREATE TABLE TEST
( COL2 NUMBER,
COLUMN1 VARCHAR2(20 BYTE),
CONSTRAINT TEST_UK1 UNIQUE (COLUMN1)
) ;
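For the TEST table above, the statement the procedure prints and executes ends up looking roughly like this (COLUMN1 carries the unique constraint, COL2 does not; the schema prefixes still need to be added as noted in the inline comments):
MERGE INTO TEST t1              -- target schema prefix goes here
USING (SELECT * FROM TEST) t2   -- source schema prefix goes here
ON (t1.COLUMN1 = t2.COLUMN1)
WHEN MATCHED THEN
  UPDATE SET t1.COL2 = t2.COL2
WHEN NOT MATCHED THEN
  INSERT (t1.COL2, t1.COLUMN1)
  VALUES (t2.COL2, t2.COLUMN1);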
Oh dear! Normally, such a question would be quickly closed as "too broad", but we need to support victims of evil advisors!
As for the effort: I would estimate a week full time for an experienced expert, plus two days of quality checking by an experienced QA engineer.
First of all, there is no way that such a complex data merge will work on the first try. That means you'll need test copies of both schemas that can be easily rebuilt, and you'll need a place to try it out. Normally this is done with an export of both schemas and an empty dev database.
Next, you need both schemas close enough to be able to compare the data. I'd do it with an import of the export files mentioned above. If the schema names are identical, then rename one during import.
Next, I'd double-check whether the structure is really identical, with queries like:
SELECT a.owner, a.table_name, b.owner, b.table_name
FROM all_tables a
FULL JOIN all_tables b
ON a.table_name = b.table_name
AND a.owner = 'SCHEMAA'
AND b.owner = 'SCHEMAB'
WHERE a.owner IS NULL or b.owner IS NULL;
Next, I'd check if the primary and unique keys have overlaps:
SELECT id FROM schemaa.table1
INTERSECT
SELECT id FROM schemab.table1;
As there are 300+ tables, I'd generate those queries:
DECLARE
stmt VARCHAR2(30000);
n NUMBER;
schema_a CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAA';
schema_b CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAB';
BEGIN
FOR c IN (SELECT owner, constraint_name, table_name,
(SELECT LISTAGG(column_name,',') WITHIN GROUP (ORDER BY position)
FROM all_cons_columns c
WHERE s.owner = c.owner
AND s.constraint_name = c.constraint_name) AS cols
FROM all_constraints s
WHERE s.constraint_type IN ('P')
AND s.owner = schema_a)
LOOP
dbms_output.put_line('Checking pk '||c.constraint_name||' on table '||c.table_name);
stmt := 'SELECT count(*) FROM '||schema_a||'.'||c.table_name
||' JOIN '||schema_b||'.'||c.table_name
|| ' USING ('||c.cols||')';
--dbms_output.put_line('Query '||stmt);
EXECUTE IMMEDIATE stmt INTO n;
dbms_output.put_line('Found '||n||' overlapping primary keys in table '||c.table_name);
END LOOP;
END;
/
First of all, for 350 tables you will most probably need dynamic SQL.
Declare a CURSOR or a COLLECTION (a table of VARCHAR2) holding all the table names.
Declare a string variable to build the dynamic SQL.
Loop through the entire list of table names and, for each table, generate a string which is executed as SQL with the EXECUTE IMMEDIATE command.
The dynamic SQL that is built should insert the values from the source table into the target table. If the PK already exists in the target table, check the field that represents the last-updated date: if it is more recent in the source table than in the target, perform an update in the target table; otherwise do nothing. A sketch of that idea is below.
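A minimal sketch of that approach for one schema pair; the schema names SCHEMAA/SCHEMAB and the columns ID, SOME_COL and LAST_UPDATED are only illustrative, and in practice the column lists and the ON clause would be generated per table from the dictionary views shown above:
DECLARE
  v_sql VARCHAR2(4000);
BEGIN
  FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'SCHEMAA') LOOP
    -- Hard-coded placeholder column lists; generate them per table from
    -- all_cons_columns / all_tab_cols before running this for real.
    v_sql :=
      'MERGE INTO schemab.' || t.table_name || ' tgt
       USING schemaa.'      || t.table_name || ' src
       ON (tgt.id = src.id)
       WHEN MATCHED THEN
         UPDATE SET tgt.some_col = src.some_col
         WHERE src.last_updated > tgt.last_updated
       WHEN NOT MATCHED THEN
         INSERT (id, some_col, last_updated)
         VALUES (src.id, src.some_col, src.last_updated)';
    DBMS_OUTPUT.PUT_LINE(v_sql);
    -- EXECUTE IMMEDIATE v_sql;  -- uncomment once the statement is generated properly
  END LOOP;
END;
/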

FORALL+ EXECUTE IMMEDIATE + INSERT Into tbl SELECT

I have got stuck on the code below and am getting a syntax error - please help.
Basically I am using a collection to store a few department ids and then would like to use these department ids as a filter condition while inserting data into the emp table in a FORALL statement.
Below is sample code:
While compiling this code I am getting an error. My requirement is to use INSERT INTO table SELECT * FROM table and I cannot avoid it, so please suggest.
create or replace Procedure abc(dblink VARCHAR2)
CURSOR dept_id is select dept_ids from dept;
TYPE nt_dept_detail IS TABLE OF VARCHAR2(25);
l_dept_array nt_dept_detail;
Begin
OPEN dept_id;
FETCH dept_id BULK COLLECT INTO l_dept_array;
IF l_dept_array.COUNT() > 0 THEN
FORALL i IN 1..l_dept_array.COUNT SAVE EXCEPTIONS
EXECUTE IMMEDIATE 'INSERT INTO stg_emp SELECT
Dept,''DEPT_10'' FROM dept_emp'||dblink||' WHERE
dept_id = '||l_dept_array(i)||'';
COMMIT;
END IF;
CLOSE dept_id;
end abc;
Why are you bothering to use cursors, arrays etc in the first place? Why can't you just do a simple insert as select?
Problems with your procedure as listed above:
You don't declare procedures like Procedure abc () - for a standalone procedure, you would do create or replace procedure abc as, or in a package: procedure abc is
You reference a variable called "dblink" that isn't declared anywhere.
You didn't put end abc; at the end of your procedure (I hope that was just a mis-c&p?)
You're effectively doing a simple insert as select, but you're way over-complicating it, plus you're making your code less performant.
You've not listed the column names that you're trying to insert into; if stg_emp has more than two columns or ends up having columns added, your code is going to fail.
Assuming your dblink name isn't known until runtime, then here's something that would do what you're after:
create Procedure abc (dblink in varchar2)
is
begin
execute immediate 'insert into stg_emp select dept, ''DEPT_10'' from dept_emp#'||dblink||
' where dept_id in (select dept_ids from dept)';
commit;
end abc;
/
If, however, you do know the dblink name, then you'd just get rid of the execute immediate and do:
create Procedure abc (dblink in varchar2)
is
begin
insert into stg_emp -- best to list the column names you're inserting into here
select dept, 'DEPT_10'
from dept_emp#dblink -- where dblink is the actual, known database link name
where dept_id in (select dept_ids from dept);
commit;
end abc;
/
There appears to be a lot wrong with this code.
1) Why the execute immediate? Is there any explicit requirement for it? If not, don't use it.
2) Where is the dblink variable declared?
3) As Boneist already stated, why not a simple subselect in the insert statement?
INSERT INTO stg_emp
SELECT dept, 'DEPT_10'
FROM dept_emp#dblink
WHERE dept_id IN (SELECT dept_ids FROM dept);
For one, it would make the code actually readable ;)

iterating thru cursor in Oracle

I've found a good question at https://dba.stackexchange.com/questions/3587/oracle-automate-export-unload-of-data. Is it valid to use such a construction:
FOR r IN (SELECT * FROM table) LOOP
UTL_FILE.PUT_LINE(lfFilelog, r.row);
END LOOP;
I'm trying to use something like this:
CREATE OR REPLACE PROCEDURE p_name(DESTFOLDER in varchar2, FILENAME in varchar2)
IS
V_FILEHANDLE UTL_FILE.FILE_TYPE;
CURSOR dataset IS
SELECT
field1,
field2,
fieldN
FROM
table1,
table2,
(SELECT field3 from table3);
-- WHERE CLAUSE ... and so on..
BEGIN
V_FILEHANDLE := UTL_FILE.FOPEN(DESTFOLDER, FILENAME, 'w');
FOR R IN dataset LOOP
UTL_FILE.PUT_LINE(V_FILEHANDLE, R.ROW);
END LOOP;
END;
/
and I am getting a PLS-00302 error, which states that the ROW component must be declared. So as far as I understand, this field should already exist in the query. Am I right?
Can I simply write a row from the cursor?
The answer mentioned is not complete; I think it was given as an example (pseudo-code) that lacks implementation details.
As it is:
your SELECT clause is invalid, you aren't selecting anything. What do you want to select?
the construct XX.row where xx is a cursor doesn't exist
furthermore, the UTL_FILE.PUT_LINE procedure accepts a VARCHAR2 as its second argument, not any kind of rowtype
you can't name a table table (although you could name it "table").
Given a table mytable(col1, col2, ... , colN) you could write:
CREATE OR REPLACE PROCEDURE p_name(destfolder IN VARCHAR2, filename IN VARCHAR2)
IS
  v_filehandle UTL_FILE.FILE_TYPE;
  CURSOR dataset IS SELECT col1, col2, /*...*/ coln FROM mytable;
BEGIN
  v_filehandle := UTL_FILE.FOPEN(destfolder, filename, 'w');
  FOR r IN dataset LOOP
    -- concatenate the columns yourself into one VARCHAR2 line
    UTL_FILE.PUT_LINE(v_filehandle, r.col1 ||';'|| r.col2 /*...*/ || r.coln);
  END LOOP;
  UTL_FILE.FCLOSE(v_filehandle);
END;
/
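A usage sketch (names are illustrative): the first argument must be an existing Oracle DIRECTORY object the current user can write to, matching the FOPEN call above:
-- CREATE DIRECTORY data_dir AS '/tmp';   -- run once as a privileged user
BEGIN
  p_name('DATA_DIR', 'mytable_export.csv');
END;
/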
