How to create a table and insert in the same statement - Oracle

I'm new to Oracle and I'm struggling with this:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name like 'Newtable';
IF(cnt=0) THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
END IF;
COMMIT;
SELECT COUNT(*) INTO cnt FROM Newtable where id='something';
IF (cnt=0) THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
END IF;
END;
This keeps crashing and gives me "PL/SQL: ORA-00942: table or view does not exist" on the insert line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more, of course) in a single transaction.

It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me - even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit, you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails - but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail as they might not be able to see, or won't look for, the other schema's version. Using user_tables or adding an owner check might be a bit safer.
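A minimal sketch of the whole block with every reference to the new table made dynamic (the column names and values are placeholders, since the question abbreviates them):
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt
  FROM user_tables
  WHERE table_name = 'NEWTABLE';  -- unquoted identifiers are stored in upper case in the dictionary
  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable (id VARCHAR2(30) /* ...etc */)';
  END IF;
  EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable WHERE id = 'something']' INTO cnt;
  IF cnt = 0 THEN
    EXECUTE IMMEDIATE q'[INSERT INTO Newtable (id) VALUES ('something') /* ...etc */]';
  END IF;
END;
/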

Try the following approach, i.e. put the create and the insert in two different blocks, run one after the other:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt
FROM all_tables
WHERE table_name LIKE 'NEWTABLE'; -- dictionary stores unquoted names in upper case
IF (cnt = 0)
THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
END IF;
END;
/
DECLARE
cnt2 NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt2
FROM newtable
WHERE c1 = 'jack';
IF (cnt2 = 0)
THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
END IF;
END;
/

Oracle handles the execution of a block in two steps:
First it parses the block and compiles it into an internal representation (so-called "P-code").
It then runs the P-code (it may be interpreted or compiled to machine code, depending on your architecture and Oracle version).
To compile the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, so there is no such schema object and the code does not compile.
Regarding your intention to create the tables in one big transaction: this will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (CREATE TABLE, ALTER TABLE, TRUNCATE TABLE(!), etc.). So after each CREATE TABLE, Oracle commits the current transaction and starts a new one.
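A quick way to see this for yourself (the table names here are made up for the illustration):
INSERT INTO some_existing_table (id) VALUES (1);  -- not yet committed
CREATE TABLE another_table (c1 NUMBER);           -- DDL: implicitly commits the insert above
ROLLBACK;                                         -- too late, the inserted row is already committed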

Related

Trying to create a DML file of the owner's inserts in Oracle

I am trying to create a DML file that contains all the inserts for a database, using only a script and asking only for the owner name. I found some documentation about creating files in Oracle and some more about how to get the insert statements.
This is the query that gets the inserts
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
And this is what I'm trying to do in order to create the file with the selected rows from the query:
DECLARE
F1 UTL_FILE.FILE_TYPE;
CURSOR C_TABLAS IS
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
V_INSERT VARCHAR2(32767);
BEGIN
OPEN C_TABLAS;
LOOP
FETCH C_TABLAS INTO V_INSERT;
EXIT WHEN C_TABLAS%NOTFOUND;
F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests','TestUno.dml','W');
UTL_FILE.PUT_LINE(F1, V_INSERT);
UTL_FILE.FCLOSE (F1);
END LOOP;
CLOSE C_TABLAS;
END;
I'm having trouble with the fetch, I'm getting this error: wrong number of values in the INTO list of a FETCH statement
I know that it is a basic one, but I can't figure out how many columns I am getting from the query above
Although I'm trying it this way, I wouldn't mind changing it; I need to create a DML file of all the inserts needed to replicate the database of the given user. Thanks a lot.
In SQL Developer, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
Then the /*insert*/ hint is processed by SQL Developer on the client-side and converts the returned result set into DML statements.
To quote @ThatJeffSmith in his answer where he gave the above solution:
here is a SQL Developer-specific solution
That behaviour is specific to the SQL Developer client application.
In the Oracle database, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
Then /*insert*/ is an inline comment and it is IGNORED and has zero effect on the output of the query.
Therefore, when you do:
DECLARE
F1 UTL_FILE.FILE_TYPE;
CURSOR C_TABLAS IS
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
V_INSERT VARCHAR2(32767);
BEGIN
OPEN C_TABLAS;
LOOP
FETCH C_TABLAS INTO V_INSERT;
EXIT WHEN C_TABLAS%NOTFOUND;
F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests','TestUno.dml','W');
UTL_FILE.PUT_LINE(F1, V_INSERT);
UTL_FILE.FCLOSE (F1);
END LOOP;
CLOSE C_TABLAS;
END;
/
The PL/SQL anonymous block will be processed by the database's PL/SQL engine on the server side; it will context-switch and pass the cursor's SQL to the database's SQL engine, where it will be run, the /*insert*/ comment will be ignored, and all the columns will be returned.
I can't figure out how many columns I am getting from the query above.
One column for every column in the ALL_TAB_COLUMNS view. You can use:
SELECT * FROM all_tab_columns FETCH FIRST ROW ONLY;
And then count the columns. I made it 37 columns (but might have miscounted).
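If all you want is to get past the FETCH error, declare a record based on the cursor with %ROWTYPE so the INTO list always matches the select list. A minimal sketch (it only fixes the immediate error; as explained below, this approach still won't give you usable INSERT statements):
DECLARE
  CURSOR c_tablas IS
    SELECT * FROM all_tab_columns WHERE owner = 'BETA';
  r_tabla c_tablas%ROWTYPE;  -- one field per column of the cursor
BEGIN
  OPEN c_tablas;
  LOOP
    FETCH c_tablas INTO r_tabla;
    EXIT WHEN c_tablas%NOTFOUND;
    -- reference individual fields here, e.g. r_tabla.table_name, r_tabla.column_name
    NULL;
  END LOOP;
  CLOSE c_tablas;
END;
/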
However
Trying to generate INSERT statements that correspond to all the rows in the ALL_TAB_COLUMNS view so that you can recreate the database is WRONG. You need to generate the DDL statements for each table, not DML statements that try to modify a data dictionary view (which, as likely as not, would leave your database in an unusable state if it worked at all).
If you want to recreate the database then generate the DDL for each object (for example with DBMS_METADATA, sketched below), or back up the database and then restore it into the new database.
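A hedged sketch of that route, using DBMS_METADATA to write each table's CREATE statement to a file. EXPORT_DIR is an assumed directory object, and very long DDL returned as a CLOB may need to be written in chunks rather than with a single PUT_LINE call:
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXPORT_DIR', 'schema_ddl.sql', 'W', 32767);
  FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'BETA') LOOP
    UTL_FILE.PUT_LINE(f, DBMS_METADATA.GET_DDL('TABLE', t.table_name, 'BETA') || ';');
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/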

How to run the stored procedure in batch mode or run it with parallel processing

We are iterating over 100k+ records from a global temporary table. The stored procedure below iterates over all records from the global temp table one by one and has to perform the three steps below:
check whether the product exists or not;
check whether the product's assets have the 'category' applied or not;
check whether the assets have file names like '%pdf%' or not.
So each record has to go through these 3 steps, and the final document names are stored in the table for each successful record. If an error occurs in any of the steps, the error message is stored for that record.
The stored procedure below is taking a long time to process because it works sequentially.
Is there any way to make this process faster in the stored procedure itself by doing batch processing?
If it's not possible in the stored procedure, can we rewrite this code in Java and run it in multi-threaded mode, e.g. creating 10 threads where each thread takes one record concurrently and processes it? I would be happy if somebody could give some pseudo-code.
Which approach would you suggest?
DECLARE
V_NODE_ID VARCHAR2(20);
V_FILENAME VARCHAR2(100);
V_CATEGORY_COUNT INTEGER :=0;
FINAL_FILNAME VARCHAR2(2000);
V_FINAL_ERRORMESSAGE VARCHAR2(2000);
CURSOR C1 IS
SELECT isbn FROM GT_ADD_ISBNS GT;
CURSOR C2(v_isbn in varchar2) IS
SELECT ANP.NODE_ID NODE_ID
FROM
table1 ANP,
table2 ANPP,
table3 AN
WHERE
ANP.NODE_ID=AN.ID AND
ANPP.NODE_ID=ANP.NODE_ID AND
AN.NAME_ID =26 AND
ANP.CATEORGY='category' AND
ANP.QNAME_ID='categories' AND
ANP.NODE_ID IN(SELECT CHILD_NODE_ID
FROM TABLE_ASSOC START WITH PARENT_NODE_ID IN(v_isbn)
CONNECT BY PRIOR CHILD_NODE_ID = PARENT_NODE_ID);
BEGIN
--Iterating all Products
FOR R1 IN C1
LOOP
FINAL_FILNAME :='';
BEGIN
--To check whether Product is exists or not
SELECT AN.ID INTO V_NODE_ID
FROM TABLE1 AN,
TABLE2 ANP
WHERE
AN.ID=ANP.NODE_ID AND
ANP.VALUE in(R1.ISBN);
V_CATEGORY_COUNT :=0;
V_FINAL_ERRORMESSAGE :='';
--To check Whether Product inside the assets are having the 'category' is applied or not
FOR R2 IN C2(R1.ISBN)
LOOP
V_CATEGORY_COUNT := V_CATEGORY_COUNT+1;
BEGIN
--In this Logic Product inside the assets have applied the 'category' But those assets are having documents LIKE '%pdf%' or not
SELECT ANP.STRING_VALUE into V_FILENAME
FROM
table1 ANP,
table2 ANPP,
table3 ACD
WHERE
ANP.QNAME_ID=21 AND
ACD.ID=ANPP.LONG_VALUE AND
ANP.NODE_ID=ANPP.NODE_ID AND
ANPP.QNAME_ID=36 AND
ANP.STRING_VALUE LIKE '%pdf%' AND
ANP.NODE_ID=R2.NODE_ID;
FINAL_FILNAME := FINAL_FILNAME || V_FILENAME ||',';
EXCEPTION WHEN
NO_DATA_FOUND THEN
V_FINAL_ERRORMESSAGE:=V_FINAL_ERRORMESSAGE|| 'Category is applied for this Product But for the asset:'|| R2.NODE_ID || ':Documents[LIKE %pdf%] were not found ;';
UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE= V_FINAL_ERRORMESSAGE WHERE ISBN= R1.ISBN;
END;--Iterating for each NODEID
END LOOP;--Iterating the assets[Nodes] for each product of catgeory
-- DBMS_OUTPUT.PUT_LINE('R1.ISBN:' || R1.ISBN ||'::V_CATEGORY_COUNT:' || V_CATEGORY_COUNT);
IF(V_CATEGORY_COUNT = 0) THEN
UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE= 'Category is not applied to none of the Assets for this Product' WHERE ISBN= R1.ISBN;
END IF;
EXCEPTION WHEN
NO_DATA_FOUND THEN
UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE= 'Product is not Found:' WHERE ISBN= R1.ISBN;
END;
-- DBMS_OUTPUT.PUT_LINE( R1.ISBN || 'Final documents:'||FINAL_FILNAME);
UPDATE GT_ADD_ISBNS SET FILENAME=FINAL_FILNAME WHERE ISBN= R1.ISBN;
COMMIT;
END LOOP;--looping gt_isbns
END;
You have a number of potential performance hits. Here's one:
"We are iterating 100k+ records from global temporary table"
Global temporary tables can be pretty slow. Populating them means writing all that data to disk; reading from them means reading from disk. That's a lot of I/O which might be avoidable. Also, GTTs use the temporary tablespace so you may be in contention with other sessions doing large sorts.
Here's another red flag:
FOR R1 IN C1 LOOP
... FOR R2 IN C2(R1.ISBN) LOOP
SQL is a set-based language. It is optimised for joining tables and returning sets of data in a highly performant fashion. Nested cursor loops mean row-by-row processing, which is undoubtedly easier to code but may be orders of magnitude slower than the equivalent set operation would be.
--To check whether Product is exists or not
You have several queries selecting from the same tables (AN, ANP) using the same criteria (isbn). Perhaps all these duplicates are the only way of validating your business rules, but it seems unlikely.
FINAL_FILNAME := FINAL_FILNAME || V_FILENAME ||',';
Maybe you could rewrite your query to use listagg() instead of using procedural logic to concatenate a string?
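For instance, a rough sketch of that idea against the question's table1 (only the file-name lookup, not the whole join): LISTAGG builds the comma-separated list per node in a single query.
SELECT anp.node_id,
       LISTAGG(anp.string_value, ',') WITHIN GROUP (ORDER BY anp.string_value) AS filenames
FROM   table1 anp
WHERE  anp.qname_id = 21
AND    anp.string_value LIKE '%pdf%'
GROUP  BY anp.node_id;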
UPDATE GT_ADD_ISBNS
Again, all your updates are single row operations instead of set ones.
"Is there any way to make this process faster in the stored procedure itself by doing batch process?"
Without knowing your rules and the context we cannot rewrite your logic for you, but 15-16 hours is way too long for this so you can definitely reduce the elapsed time.
Things to consider:
Replace the writing to and reading from the temporary table with the query you use to populate it
Rewrite the loops to use BULK COLLECT with a high LIMIT (e.g. 1000) to improve the select efficiency (see the sketch after this list)
Populate arrays and use FORALL to improve the efficiency of the updates (also shown in the sketch below)
Try to remove all those individual look-ups by incorporating the logic into the main query, using OUTER JOIN syntax to test for existence.
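A very rough sketch of the BULK COLLECT / FORALL pattern mentioned above, using the gt_add_isbns table from the question. The per-row logic is reduced to a placeholder, so this only illustrates the batching, not the business rules:
DECLARE
  TYPE t_isbn_tab  IS TABLE OF gt_add_isbns.isbn%TYPE;
  TYPE t_error_tab IS TABLE OF gt_add_isbns.error_message%TYPE;
  l_isbns  t_isbn_tab;
  l_errors t_error_tab;
  CURSOR c IS SELECT isbn FROM gt_add_isbns;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_isbns LIMIT 1000;  -- read in batches of 1000
    EXIT WHEN l_isbns.COUNT = 0;
    l_errors := t_error_tab();
    l_errors.EXTEND(l_isbns.COUNT);
    FOR i IN 1 .. l_isbns.COUNT LOOP
      l_errors(i) := NULL;  -- placeholder for the real per-row checks
    END LOOP;
    FORALL i IN 1 .. l_isbns.COUNT  -- one context switch for the whole batch of updates
      UPDATE gt_add_isbns
      SET    error_message = l_errors(i)
      WHERE  isbn = l_isbns(i);
  END LOOP;
  CLOSE c;
END;
/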
These are all guesses. If you really want to know where the procedure is spending the time - and that knowledge is the root of all successful tuning, so you ought to want to know - you should run the procedure under a PL/SQL Profiler. This will tell you which lines cost the most time, and those are usually the ones where you need to focus your tuning effort. If you don't already have access to DBMS_PROFILER you will need a DBA to run the install script for you.
" can we change this code into Java and run this code in multi threaded mode?"
Given that one of the reasons the procedure is slow is the I/O cost of selecting from the temporary table, there's a good chance multi-threading would introduce further contention and actually make things worse. You should seek to improve the stored procedure first.

PL/SQL "WHERE CURRENT OF" vs "ROWID"

Why did Oracle create the "where current of" syntax when you can use "rowid"? Example:
BEGIN
FOR rec IN (SELECT t.column1, t.rowid rid FROM test_table t) LOOP
UPDATE test_table tb SET column1 = some_function(rec.column1) WHERE tb.rowid = rec.rid;
END LOOP;
COMMIT;
END;
DECLARE
CURSOR cur IS SELECT t.column1 FROM test_table t;
param1 test_table.column1%TYPE;
BEGIN
LOOP
FETCH cur INTO param1;
UPDATE test_table tb SET tb.column1 = some_function(param1) WHERE CURRENT OF cur;
EXIT WHEN cur%NOTFOUND;
END LOOP;
COMMIT;
END;
WHERE CURRENT OF is used to identify the LAST FETCHED ROW in the cursor. It's safer, because you have 100% confidence that you are, for example, updating the last fetched row from the cursor. With ROWIDs there's a danger, because it's really easy to mess something up.
Using the WHERE CURRENT OF clause without a FOR UPDATE clause in your SELECT statement could be risky. The reason is that when you do not apply the FOR UPDATE clause, you are not placing an exclusive row-level lock on the rows you intend to update in the following UPDATE DML. That opens an opportunity for the outside world to violate data consistency, i.e. some other user from a different session may be looking to UPDATE the same rows of your targeted table.
Also, if you learn more about the WHERE CURRENT OF clause you will notice that Oracle internally makes use of ROWIDs to reach/identify the rows which need to be updated.
Hope it helps! Happy programming.
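A minimal sketch of the combination the answers describe, using the names from the question: the cursor is declared FOR UPDATE, so each fetched row is locked and WHERE CURRENT OF refers to it safely (the EXIT check is also moved to immediately after the FETCH):
DECLARE
  CURSOR cur IS
    SELECT t.column1 FROM test_table t FOR UPDATE OF column1;
  param1 test_table.column1%TYPE;
BEGIN
  OPEN cur;
  LOOP
    FETCH cur INTO param1;
    EXIT WHEN cur%NOTFOUND;
    UPDATE test_table SET column1 = some_function(param1) WHERE CURRENT OF cur;
  END LOOP;
  CLOSE cur;
  COMMIT;
END;
/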

Delete duplicated records using procedure in Oracle/PLSQL

As the title says, I want to create a procedure in Oracle PL/SQL to delete rows which share the same values in some columns. I know how to implement it using a query, but how do I do it using a procedure? Do I have to use a loop? I am very new to PL/SQL.
Please help, thank you a lot!
If you want a simple procedure to delete from a particular table you can use the below piece of code:
CREATE OR REPLACE PROCEDURE DELETE_DUPLICATE AS
BEGIN
FOR I IN (SELECT TAB.A, TAB.B, MIN(ROWID) RID
FROM DUPLICATE_TABLE TAB
GROUP BY TAB.A, TAB.B
HAVING COUNT(*) > 1) LOOP
DELETE FROM DUPLICATE_TABLE TAB
WHERE I.RID <> TAB.ROWID
AND TAB.A = I.A
AND TAB.B = I.B;
COMMIT;
END LOOP;
END;
Here DUPLICATE_TABLE is the table having duplicate values. We are deleting rows that have the same values in columns A and B, keeping the row with the lowest ROWID in each group.
Hey. As per your question, it is not advisable to create a procedure for this simple task, which can easily be done via pure SQL. But if it is really important to make it a stored procedure, then I would suggest using pure SQL logic rather than any kind of loop, since loops cause context switching which takes a toll on the database. Below is a snippet which I think will be useful; it incorporates an analytic function to address your issue. Let me know if it helps.
CREATE OR REPLACE PROCEDURE Dup_DELETE
AS
BEGIN
DELETE
FROM EMP
WHERE EMP.ROWID IN
-- Assuming that i am trying to segregate the duplicate values on Empno and ename
(SELECT A.ROWID
FROM
(SELECT ROW_NUMBER() OVER(PARTITION BY EMPNO,ENAME ORDER BY JOB DESC) RNK,
empno,
ename,
rowid
FROM EMP
)A
WHERE A.RNK <> 1
);
END;
Just put your SQL statement in a procedure. There's no rule that says you have to change the approach because it's PL/SQL. For example,
create or replace procedure dedupe_sometable
as
begin
delete sometable
where rowid in
( select lag(rowid) over (partition by id order by null)
from sometable );
end dedupe_sometable;
Add logging etc as needed.
(Ideally this would be within a package and not a standalone procedure.)
If you know how to do it in SQL, it is better to do it in SQL. PL/SQL should be used only when you cannot write the specific task as a SQL statement, or if there are performance issues in the query that can be improved by writing the logic in PL/SQL (the second scenario is very rare).
If you want to write a PL/SQL procedure in order to parameterise it, so that any table can be passed in to have its duplicates deleted, then it makes sense. You need to generate the DELETE statement dynamically in the procedure and run it with EXECUTE IMMEDIATE.
If your intention is to learn PL/SQL, then it is a programming language and you need to spend some time on it, as if you were learning any new programming language.
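A hedged sketch of such a parameterised procedure (the procedure and parameter names are illustrative): the table name is validated with DBMS_ASSERT, but the column list is still trusted input here and would need its own validation before real use.
CREATE OR REPLACE PROCEDURE dedupe_any_table (
  p_table_name  IN VARCHAR2,
  p_column_list IN VARCHAR2  -- e.g. 'a, b'
) AS
BEGIN
  EXECUTE IMMEDIATE
    'DELETE FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table_name) || ' t ' ||
    'WHERE t.ROWID NOT IN (SELECT MIN(ROWID) FROM ' ||
    DBMS_ASSERT.SQL_OBJECT_NAME(p_table_name) ||
    ' GROUP BY ' || p_column_list || ')';
END dedupe_any_table;
/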
It is not recommended to use PL/SQL for something that can be done using plain SQL.
Whenever you have a combination of SQL and PL/SQL, you are switching between the SQL and PL/SQL engines, so it does not make sense to incur this overhead without a proper requirement.
If for some reason there is still a need for doing this, you can at least implement a bulk delete to reduce some of the overhead. The code below shows how to do that:
DECLARE
  TYPE t_del IS TABLE OF ROWID;
  l_del t_del;
  CURSOR c IS
    -- rowids to delete: every row except the one with MIN(ROWID) in its (age, gender) group
    SELECT t.ROWID RID
    FROM test_tbl t
    WHERE t.ROWID NOT IN (SELECT MIN(ROWID)
                          FROM test_tbl
                          GROUP BY age, gender);
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_del LIMIT 1000;
    EXIT WHEN l_del.COUNT = 0;
    FORALL i IN l_del.FIRST..l_del.LAST
      DELETE FROM test_tbl WHERE ROWID = l_del(i);
  END LOOP;
  CLOSE c;
END;
/

DDL statements in PL/SQL?

I am trying the code below to create a table in PL/SQL:
DECLARE
V_NAME VARCHAR2(20);
BEGIN
EXECUTE IMMEDIATE 'CREATE TABLE TEMP(NAME VARCHAR(20))';
EXECUTE IMMEDIATE 'INSERT INTO TEMP VALUES(''XYZ'')';
SELECT NAME INTO V_NAME FROM TEMP;
END;
/
The SELECT statement fails with this error:
PL/SQL: ORA-00942: table or view does not exist
Is it possible to CREATE, INSERT and SELECT all in a single PL/SQL Block one after other?
I assume you're doing something like the following:
declare
v_temp varchar2(20);
begin
execute immediate 'create table temp(name varchar(20))';
execute immediate 'insert into temp values(''XYZ'')';
select name into v_temp from temp;
end;
At compile time the table, TEMP, does not exist. It hasn't been created yet. As it doesn't exist you can't select from it; you therefore also have to do the SELECT dynamically. There isn't actually any need to do a SELECT in this particular situation, though; you can use the RETURNING INTO syntax:
declare
v_temp varchar2(20);
begin
execute immediate 'create table temp(name varchar2(20))';
execute immediate 'insert into temp
values(''XYZ'')
returning name into :1'
returning into v_temp;
end;
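If you did want the SELECT itself, it also has to be dynamic, along the same lines (a minimal sketch):
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar2(20))';
  execute immediate 'insert into temp values(''XYZ'')';
  execute immediate 'select name from temp' into v_temp;
end;
/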
However, needing to dynamically create tables is normally an indication of a badly designed schema. It shouldn't really be necessary.
I can recommend René Nyffenegger's post "Why is dynamic SQL bad?" for reasons why you should avoid dynamic SQL, if at all possible, from a performance standpoint. Please also be aware that you are much more open to SQL injection and should use bind variables and DBMS_ASSERT to help guard against it.
If you run the program multiple times you will get an error even after modifying the program to run the SELECT statement as dynamic SQL or to use a RETURNING INTO clause.
That's because the first run creates the table without any issue, but on the next run the table already exists and, since you don't have a DROP statement, the CREATE fails with an error along the lines of "ORA-00955: name is already used by an existing object".
So my suggestion is: before creating a table in a PL/SQL program, always check whether a table with the same name already exists in the database. You can do this check using the data dictionary views / system tables which store the metadata, depending on your database type.
For example, in Oracle you can use the following views to decide whether a table needs to be created or not:
DBA_TABLES ,
ALL_TABLES,
USER_TABLES
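For instance, a minimal check against USER_TABLES before the CREATE (an alternative is to let the CREATE fail and catch the exception from EXECUTE IMMEDIATE):
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt
  FROM user_tables
  WHERE table_name = 'TEMP';  -- unquoted identifiers are stored in upper case
  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE temp(name VARCHAR2(20))';
  END IF;
END;
/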
