Creating INSERT statements to export the data from every existing table - Oracle

I am currently not able to get a proper database dump, because the DB runs on a remote server inside a closed system: no remote copying is possible, and the only way to get files in or out is by being physically present at the server's location or via e-mail (but I can't send a several-GB dump by mail...).
However, I still need the data in order to import it into my dev system.
I figure the best way of doing this is by creating INSERT statements that contain the needed information.
SQL Developer can actually do this, but apparently it only works for one table at a time. As soon as you select multiple tables, the respective option disappears from the right-click menu and you can only export the DDL statements :-/
So this approach is not really viable for me, as there are hundreds of tables...
Does anyone know of a standardized way to create INSERT statements by querying the metadata views (user_tables, user_tab_columns, ...)? I could imagine that it is possible to create all the statements by cleverly joining those metadata views. However, before sinking several hours into this approach, I'd appreciate it if someone could confirm this suspicion first.
Also, someone else must have had this problem before, so I hope that some of you may be able to give me a hint on other approaches. Thanks in advance!

My answer isn't a full solution.
1) To extract the DDL, use:
select table_name, dbms_metadata.get_ddl(object_type => 'TABLE', name => table_name) from user_tables;
2) To extract the records from a table, use xmltype (constructed from a ref cursor) to serialize the rows, and dbms_xmlstore to insert them again.
Below is only a suggestion of how to do this.
create table test as select level as "LP" from dual connect by level < 100;

declare
  v_cursor sys_refcursor;
  xmlDoc   xmltype;
  insCtx   DBMS_XMLSTORE.ctxType;
  v_rows   number;
begin
  -- serialize the table contents to XML
  open v_cursor for 'select * from test';
  xmlDoc := xmltype(v_cursor);
  close v_cursor;
  dbms_output.put_line(xmlDoc.getClobVal()); -- the extracted rows as XML
  -- and load that XML back through dbms_xmlstore
  insCtx := DBMS_XMLSTORE.newContext('test');
  DBMS_XMLSTORE.clearUpdateColumnList(insCtx);
  v_rows := DBMS_XMLSTORE.insertXML(insCtx, xmlDoc);
  dbms_output.put_line('rows inserted: ' || v_rows);
  DBMS_XMLSTORE.closeContext(insCtx);
  commit;
end;
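To answer the metadata question directly: yes, the INSERT statements can be generated from the dictionary views alone. Below is a rough, untested sketch of that approach. It wraps every column value in quotes, so it only behaves for plain character and number data (dates, embedded quotes, NULLs, LOBs and very wide tables all need extra handling), and the output goes to dbms_output, so run it with a large server output buffer.

declare
  v_sql  varchar2(32767);
  v_cols varchar2(32767);
  v_line varchar2(32767);
  c      sys_refcursor;
begin
  for t in (select table_name from user_tables) loop
    -- build one quoted-and-concatenated expression per column;
    -- note that LISTAGG overflows on very wide tables
    select listagg(q'[''''||"]' || column_name || q'["||'''']', q'[||','||]')
           within group (order by column_id)
      into v_cols
      from user_tab_columns
     where table_name = t.table_name;
    -- a query that emits one ready-to-run INSERT statement per row
    v_sql := 'select ''insert into ' || t.table_name || ' values ('' || '
          || v_cols || ' || '');'' from ' || t.table_name;
    open c for v_sql;
    loop
      fetch c into v_line;
      exit when c%notfound;
      dbms_output.put_line(v_line);
    end loop;
    close c;
  end loop;
end;

For the TEST table above, this prints lines like: insert into TEST values ('1');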

Related

How to rename multiple stored procedures in Oracle

I'm using Oracle 12c and I've run into the following problem:
I have multiple stored procedures like:
schema.TEST1, schema.TEST2, schema.TEST3, ...
Now I want to rename all of them to schema.TEST01, schema.TEST02, schema.TEST03, ... or any name I want that follows an agreed format; this is for backup purposes.
In Oracle, I can't rename a stored procedure with an ALTER ... RENAME statement the way some other databases allow. How can I do this with one click?
Thanks!
But it is nonsense; you do not need to back up that way. Still, I took it as a challenge and would like to present you the code below.
Make changes according to your schema and naming convention, and use a CLOB instead of VARCHAR2 if the source text is large.
DECLARE
  TYPE names_table IS TABLE OF VARCHAR2(50);
  names names_table;
  TYPE source_txt_table IS TABLE OF VARCHAR2(32767);
  source_txt source_txt_table;
  header           VARCHAR2(32767);
  final_source_txt VARCHAR2(32767);
BEGIN
  SELECT object_name
    BULK COLLECT INTO names
    FROM user_procedures
   WHERE object_type = 'PROCEDURE'
     AND object_name IN ('DO_SOMETHING_1', 'DO_SOMETHING_2');
  FOR i IN 1 .. names.LAST LOOP
    -- filter on type (and, ideally, owner) so same-named objects don't interleave
    SELECT text
      BULK COLLECT INTO source_txt
      FROM all_source
     WHERE name = names(i)
       AND type = 'PROCEDURE'
     ORDER BY line;
    source_txt(1) := 'CREATE OR REPLACE ' || source_txt(1);
    -- make changes according to your new naming convention
    header := REGEXP_REPLACE(UPPER(source_txt(1)), names(i), 'HR.' || names(i) || '_BCK');
    source_txt(1) := header;
    FOR j IN 1 .. source_txt.LAST LOOP
      final_source_txt := final_source_txt || source_txt(j);
    END LOOP;
    EXECUTE IMMEDIATE final_source_txt;
    dbms_output.put_line('Success: ' || names(i));
    final_source_txt := NULL;
    header := NULL;
    source_txt := NULL;
  END LOOP;
END;
For backup? That's a rather poorly chosen backup system.
What if the database dies because of a disk failure? You'll lose everything (including your "backup" procedures).
How many "backups" do you plan to keep? For example, one of my schemas contains 643 procedures/functions/packages. With two backups, I'm already close to 2K objects. If you perform backups regularly (e.g. daily), in a matter of only a month I'd be close to 20K objects. I really wouldn't want to do that.
Therefore, why wouldn't you consider something else? For example:
a version control system (such as Git)
a Data Pump export as a "logical" backup (see the sketch after this list)
letting the DBA take care of RMAN backups
if you want to do it manually, some GUI tools (such as TOAD) let you select all objects and create a script - that option stores the source code as files on your hard disk drive, and you can then back those files up somewhere else (burn them to a DVD, copy them to a USB memory stick, another hard disk drive, somewhere within your network ...)
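To illustrate the Data Pump option: a minimal, untested sketch of a schema-mode export driven through the DBMS_DATAPUMP API. DATA_PUMP_DIR and the HR schema are placeholders; in practice you would usually just run the expdp command-line client instead.

DECLARE
  h     NUMBER;
  state VARCHAR2(30);
BEGIN
  -- create a schema-mode export job and tell it what and where to dump
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, 'hr_backup.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', q'[IN ('HR')]');
  -- start the job and block until it finishes
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
  dbms_output.put_line('export finished with state: ' || state);
END;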
Finally, to answer your question: how do you do what you asked for in one click? As far as I can tell, you can't. You'd first have to write a procedure to do the job, but then you're back to my second objection to your approach. How would that procedure know that proc1 is the "original", while proc01 is a backup version? Why wouldn't someone name their procedure proc05 initially? That's a valid name.
You can also try using the DBMS_METADATA package to export the DDL of the schema's objects.
I have written an example; you can use it after modifying it according to your needs.
CREATE DIRECTORY EXTERNAL AS '/external/';

DECLARE
  h    PLS_INTEGER;
  th   PLS_INTEGER;
  fh   utl_file.file_type;
  ddls CLOB;
  sysd VARCHAR2(50);
BEGIN
  -- fetch the DDL of the procedures in the HR schema;
  -- note: this fetches at most 50 objects in one call - for a complete
  -- schema, loop over fetch_clob until it returns NULL
  h := dbms_metadata.open('PROCEDURE');
  dbms_metadata.set_filter(h, 'SCHEMA', 'HR');
  th := dbms_metadata.add_transform(h, 'DDL');
  dbms_metadata.set_count(h, 50);
  ddls := dbms_metadata.fetch_clob(h);
  -- write the result to a timestamped file in the EXTERNAL directory
  -- (utl_file.put takes VARCHAR2, so write very large CLOBs in 32K chunks)
  SELECT TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') INTO sysd FROM dual;
  fh := utl_file.fopen('EXTERNAL', 'SCHEMA_BCK_' || sysd || '.bck', 'w');
  utl_file.put(fh, ddls);
  utl_file.fclose(fh);
  dbms_metadata.close(h);
END;
It is far safer against database failures and you will not unnecessarily populate your database schema with backup objects.

How to run a stored procedure in batch mode or run it with parallel processing

We are iterating over 100k+ records from a global temporary table. The stored procedure below iterates over all records from the global temp table one by one and has to process the following three steps:
check whether the product exists or not
check whether the assets inside the product have the 'category' applied or not
check whether the assets have file names containing 'pdf' or not
So each record has to go through these 3 steps, and the final document names are stored in the table for each successful record. If an error occurs in any of the steps, an error message is stored for that record.
The stored procedure below is taking a long time to process because it processes the records sequentially.
Is there any way to make this process faster in the stored procedure itself by doing batch processing?
If it's not possible in the stored procedure, can we port this code to Java and run it in multi-threaded mode, e.g. creating 10 threads where each thread takes one record concurrently and processes this code? I would be happy if somebody could give some pseudo code.
Which approach would you suggest?
DECLARE
  V_NODE_ID            VARCHAR2(20);
  V_FILENAME           VARCHAR2(100);
  V_CATEGORY_COUNT     INTEGER := 0;
  FINAL_FILENAME       VARCHAR2(2000);
  V_FINAL_ERRORMESSAGE VARCHAR2(2000);

  CURSOR C1 IS
    SELECT isbn FROM GT_ADD_ISBNS GT;

  CURSOR C2(v_isbn IN VARCHAR2) IS
    SELECT ANP.NODE_ID NODE_ID
      FROM table1 ANP,
           table2 ANPP,
           table3 AN
     WHERE ANP.NODE_ID = AN.ID
       AND ANPP.NODE_ID = ANP.NODE_ID
       AND AN.NAME_ID = 26
       AND ANP.CATEGORY = 'category'
       AND ANP.QNAME_ID = 'categories'
       AND ANP.NODE_ID IN (SELECT CHILD_NODE_ID
                             FROM TABLE_ASSOC
                            START WITH PARENT_NODE_ID IN (v_isbn)
                          CONNECT BY PRIOR CHILD_NODE_ID = PARENT_NODE_ID);
BEGIN
  -- iterate over all products
  FOR R1 IN C1 LOOP
    FINAL_FILENAME := '';
    BEGIN
      -- check whether the product exists
      SELECT AN.ID
        INTO V_NODE_ID
        FROM TABLE1 AN,
             TABLE2 ANP
       WHERE AN.ID = ANP.NODE_ID
         AND ANP.VALUE IN (R1.ISBN);
      V_CATEGORY_COUNT := 0;
      V_FINAL_ERRORMESSAGE := '';
      -- check whether the assets inside the product have the 'category' applied
      FOR R2 IN C2(R1.ISBN) LOOP
        V_CATEGORY_COUNT := V_CATEGORY_COUNT + 1;
        BEGIN
          -- the assets have the 'category' applied; check whether they have documents LIKE '%pdf%'
          SELECT ANP.STRING_VALUE
            INTO V_FILENAME
            FROM table1 ANP,
                 table2 ANPP,
                 table3 ACD
           WHERE ANP.QNAME_ID = 21
             AND ACD.ID = ANPP.LONG_VALUE
             AND ANP.NODE_ID = ANPP.NODE_ID
             AND ANPP.QNAME_ID = 36
             AND ANP.STRING_VALUE LIKE '%pdf%'
             AND ANP.NODE_ID = R2.NODE_ID;
          FINAL_FILENAME := FINAL_FILENAME || V_FILENAME || ',';
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            V_FINAL_ERRORMESSAGE := V_FINAL_ERRORMESSAGE || 'Category is applied for this Product But for the asset:' || R2.NODE_ID || ':Documents[LIKE %pdf%] were not found ;';
            UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE = V_FINAL_ERRORMESSAGE WHERE ISBN = R1.ISBN;
        END; -- block for each NODE_ID
      END LOOP; -- iterating the assets [nodes] for each product of the category
      -- DBMS_OUTPUT.PUT_LINE('R1.ISBN:' || R1.ISBN || '::V_CATEGORY_COUNT:' || V_CATEGORY_COUNT);
      IF (V_CATEGORY_COUNT = 0) THEN
        UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE = 'Category is not applied to any of the Assets for this Product' WHERE ISBN = R1.ISBN;
      END IF;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        UPDATE GT_ADD_ISBNS SET ERROR_MESSAGE = 'Product is not Found:' WHERE ISBN = R1.ISBN;
    END;
    -- DBMS_OUTPUT.PUT_LINE(R1.ISBN || 'Final documents:' || FINAL_FILENAME);
    UPDATE GT_ADD_ISBNS SET FILENAME = FINAL_FILENAME WHERE ISBN = R1.ISBN;
    COMMIT;
  END LOOP; -- looping gt_isbns
END;
You have a number of potential performance hits. Here's one:
"We are iterating 100k+ records from global temporary table"
Global temporary tables can be pretty slow. Populating them means writing all that data to disk; reading from them means reading from disk. That's a lot of I/O which might be avoidable. Also, GTTs use the temporary tablespace so you may be in contention with other sessions doing large sorts.
Here's another red flag:
FOR R1 IN C1 LOOP
... FOR R2 IN C2(R1.ISBN) LOOP
SQL is a set-based language. It is optimised for joining tables and returning sets of data in a highly performant fashion. Nested cursor loops mean row-by-row processing, which is undoubtedly easier to code but may be orders of magnitude slower than the equivalent set operation.
-- check whether the product exists
You have several queries selecting from the same tables (AN, ANP) using the same criteria (isbn). Perhaps all these duplicate look-ups are the only way of validating your business rules, but it seems unlikely.
FINAL_FILENAME := FINAL_FILENAME || V_FILENAME || ',';
Maybe you could rewrite your query to use listagg() instead of using procedural logic to concatenate the string?
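For instance, a hedged sketch of that idea against the (anonymized) tables from the posted code - treat the table and column names as placeholders:

SELECT anp.node_id,
       LISTAGG(anp.string_value, ',') WITHIN GROUP (ORDER BY anp.string_value) AS filenames
  FROM table1 anp
 WHERE anp.qname_id = 21
   AND anp.string_value LIKE '%pdf%'
 GROUP BY anp.node_id;

One row per node with a ready-made comma-separated file list replaces the whole inner concatenation loop.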
UPDATE GT_ADD_ISBNS
Again, all your updates are single row operations instead of set ones.
"Is there any way to make this process faster in the stored procedure itself by doing batch process?"
Without knowing your rules and the context we cannot rewrite your logic for you, but 15-16 hours is way too long for this so you can definitely reduce the elapsed time.
Things to consider:
Replace the writing to and reading from the temporary table with the query you use to populate it
Rewrite the loops to use BULK COLLECT with a high LIMIT (e.g. 1000) to improve the efficiency of the selects; see the sketch after this list. Find out more.
Populate arrays and use FORALL to improve the efficiency of the updates. Find out more.
Try to remove all those individual look-ups by incorporating the logic into the main query, using OUTER JOIN syntax to test for existence.
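To make the BULK COLLECT / FORALL points concrete, here is a minimal, untested sketch against the GT_ADD_ISBNS table from the question. The per-row business logic is stubbed out; only the shape of the batch processing matters:

DECLARE
  TYPE isbn_tab  IS TABLE OF GT_ADD_ISBNS.ISBN%TYPE;
  TYPE fname_tab IS TABLE OF GT_ADD_ISBNS.FILENAME%TYPE;
  v_isbns  isbn_tab;
  v_fnames fname_tab;
  CURSOR c IS SELECT isbn FROM GT_ADD_ISBNS;
BEGIN
  OPEN c;
  LOOP
    -- fetch a batch of 1000 rows at a time instead of row by row
    FETCH c BULK COLLECT INTO v_isbns LIMIT 1000;
    EXIT WHEN v_isbns.COUNT = 0;
    v_fnames := fname_tab();
    v_fnames.EXTEND(v_isbns.COUNT);
    FOR i IN 1 .. v_isbns.COUNT LOOP
      v_fnames(i) := NULL; -- derive the file list for v_isbns(i) here
    END LOOP;
    -- one round trip updates the whole batch
    FORALL i IN 1 .. v_isbns.COUNT
      UPDATE GT_ADD_ISBNS SET FILENAME = v_fnames(i) WHERE ISBN = v_isbns(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;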
These are all guesses. If you really want to know where the procedure is spending the time - and that knowledge is the root of all successful tuning, so you ought to want to know - you should run the procedure under a PL/SQL Profiler. This will tell you which lines cost the most time, and those are usually the ones where you need to focus your tuning effort. If you don't already have access to DBMS_PROFILER you will need a DBA to run the install script for you. Find out more.
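For example, a profiling run could look like the following sketch, where MY_PROC is a placeholder for a procedure wrapping the logic above and the plsql_profiler_* tables come from the profiler install script:

BEGIN
  DBMS_PROFILER.START_PROFILER('batch tuning run 1');
  MY_PROC; -- the code under investigation
  DBMS_PROFILER.STOP_PROFILER;
END;

-- then rank the lines by time spent:
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON d.runid = u.runid
   AND d.unit_number = u.unit_number
 ORDER BY d.total_time DESC;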
" can we change this code into Java and run this code in multi threaded mode?"
Given that one of the reasons for slowing down the procedure is the I/O cost of selecting from the temporary table there's a good chance multi-threading might introduce further contention and actually make things worse. You should seek to improve the stored procedure first.

Oracle Temporary Table to convert a LONG RAW to a BLOB

Questions have been asked in the past that seem to handle pieces of my full question, but I'm not finding a really good answer.
Here is the situation:
I'm importing data from an old, but operational and production, Oracle server.
One of the columns is created as LONG RAW.
I will not be able to convert the table to a BLOB.
I would like to use a global temporary table to pull out data each time I call to the server.
This feels like a good answer, from here: How to create a temporary table in Oracle
CREATE GLOBAL TEMPORARY TABLE newtable
ON COMMIT PRESERVE ROWS
AS SELECT
MYID, TO_LOB("LONGRAWDATA") "BLOBDATA"
FROM oldtable WHERE .....;
I do not want the table hanging around, and I'd only do a chunk of rows at a time, to pull out the old table in pieces, each time killing the table. Is it acceptable behavior to do the CREATE, then do the SELECT, then DROP?
Thanks..
--- EDIT ---
Just as a follow up, I decided to take an even different approach to this.
Branching the strong-oracle package, I was able to do what I originally hoped to do, which was to pull the data directly from the table without doing a conversion.
Here is the issue I've posted. If I am allowed to publish my code to a branch, I'll post a follow up here for completeness.
The Oracle ODBC Driver Release 11.2.0.1.0 says that prefetch for LONG RAW data types is supported, which is true.
One caveat is that a LONG RAW can technically be up to 2 GB in size. I had to set a hard maximum size of 10 MB in the code, which is adequate for my use, so far at least. This could probably be made a variable passed in with the connection.
This fix is now a bit off the original topic, but it might be useful to someone else.
With Oracle GTTs, it is not necessary to drop and create the table each time, and you don't need to worry about data "hanging around." In fact, it's inadvisable to drop and re-create it. The structure itself persists, but the data in it does not: the data only persists within each session. You can test this by opening two separate clients and loading data with one; you will notice the data is not there in the second client.
In effect, each time you open a session, it's as if you are reading a completely different table, which has just been truncated.
If you want to empty the table within your stored procedure, you can always truncate it. Within a stored proc, you will need EXECUTE IMMEDIATE to do this.
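For reference, a minimal sketch of the two GTT flavours (names and columns are placeholders):

-- rows survive COMMIT but vanish when the session ends
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  myid     NUMBER,
  blobdata BLOB
) ON COMMIT PRESERVE ROWS;

-- the alternative: rows vanish at every COMMIT
-- CREATE GLOBAL TEMPORARY TABLE my_temp_table2 (myid NUMBER) ON COMMIT DELETE ROWS;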
This is really handy, but it can also make debugging a bear if you are implementing GTTs through code.
Out of curiosity, why a chunk at a time and not the entire dataset? What kind of data volumes are you talking about?
-- EDIT --
Per our comments conversation, this is very raw and untested, but I hope it will give you an idea what I mean:
CREATE OR REPLACE PROCEDURE LOAD_DATA -- no empty () on parameterless PL/SQL procedures
AS
  TOTAL_ROWS     number;
  TABLE_ROWS     number := 1;
  ROWS_AT_A_TIME number := 100;
BEGIN
  select count(*)
    into TOTAL_ROWS
    from oldtable;
  WHILE TABLE_ROWS <= TOTAL_ROWS
  LOOP
    execute immediate 'truncate table MY_TEMP_TABLE';
    -- "WHERE ROWNUM BETWEEN x AND y" returns no rows when x > 1, and ROWNUM
    -- without an ORDER BY is not stable between calls, so page on the key instead
    insert into MY_TEMP_TABLE
    SELECT MYID, TO_LOB(LONGRAWDATA) as BLOBDATA
      FROM oldtable
     WHERE MYID IN (select MYID
                      from (select MYID, ROWNUM as RN
                              from (select MYID from oldtable order by MYID))
                     where RN between TABLE_ROWS and TABLE_ROWS + ROWS_AT_A_TIME - 1);
    -- database links use @, not #
    insert into MY_REMOTE_TABLE@MY_REMOTE_SERVER
    select * from MY_TEMP_TABLE;
    commit;
    TABLE_ROWS := TABLE_ROWS + ROWS_AT_A_TIME;
  END LOOP;
  commit;
end LOAD_DATA;

Oracle multiple cursors with the same "WHERE" clause

I need to write a script with embedded PL/SQL that opens a few cursors and loops through the fetched records, performing some actions.
All cursors have, as part of their WHERE clause, a condition that looks like this: AND table_name IN ('table_one', 'table_two', ..., 'last_table_from_a_long_list')
Because the same list of tables is used everywhere, I would prefer not to repeat myself, but rather to declare the list of tables somewhere and reuse it in each cursor definition. Ideally you would put those table names in a real table and everything would work; however, the script will run in production, and creating/dropping tables for this purpose is not an accepted practice. The same restriction applies to creating a user-defined type for the table name and another one as a TABLE OF that type.
Just wondering if there are some PL/SQL tricks that would help me achieve the same thing. Repeating the long list of tables a dozen times would make the code look quite bad. That said, my script is a "one off" thing, so maybe I am worrying too much. There is still the professional "how to" curiosity, though.
Thank you in advance for your inputs.
Short answer: repeat the code.
Longer answer: not repeating the code is more work than just biting the bullet and dealing with a little less-than-perfectly elegant code.
Longest answer: if you really insist on doing this, then look into dynamically-created SQL - build your SELECT statement(s) out of strings, then either use the OPEN cursor FOR sqlString, FETCH cursor INTO ..., and CLOSE cursor statements, or, for extra karma and a pile of difficulty, study up on the DBMS_SQL package - and may Cthulhu have mercy upon your soul.
Best of luck.
A couple of options you can try:
EXECUTE IMMEDIATE
-- single-row dynamic SQL (emp_rec and emp_id must be declared accordingly)
sql_stmt := 'SELECT * FROM emp WHERE empno = :id';
EXECUTE IMMEDIATE sql_stmt INTO emp_rec USING emp_id;
Another option is DBMS_SQL
CREATE OR REPLACE PROCEDURE DYNSQL AS
  cur integer;
  rc  integer;
BEGIN
  cur := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(cur, 'CREATE TABLE X (Y DATE)', DBMS_SQL.NATIVE);
  rc := DBMS_SQL.EXECUTE(cur);
  DBMS_SQL.CLOSE_CURSOR(cur);
END;
You can Google these keywords and you will find many examples.
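Coming back to the concrete question: you can declare the IN-list once as a string constant and splice it into every dynamically-opened cursor. A minimal sketch (the table names are the placeholders from the question):

DECLARE
  -- the shared list, declared once
  k_tables CONSTANT VARCHAR2(4000) :=
    q'[('TABLE_ONE', 'TABLE_TWO', 'LAST_TABLE_FROM_A_LONG_LIST')]';
  c      SYS_REFCURSOR;
  v_name user_tables.table_name%TYPE;
BEGIN
  -- every cursor concatenates the same constant into its WHERE clause
  OPEN c FOR 'SELECT table_name FROM user_tables WHERE table_name IN ' || k_tables;
  LOOP
    FETCH c INTO v_name;
    EXIT WHEN c%NOTFOUND;
    dbms_output.put_line(v_name);
  END LOOP;
  CLOSE c;
END;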

Write Oracle Procedure To Accept List of Items used in Select

There have been a couple of hints that seem to have gotten me close here, but with some unique issues, I'm hoping this question is distinct enough to merit its own posting.
For starters here's what I have. I have an Oracle procedure that returns a standard REF CURSOR, and this REF CURSOR is passed back to my application. The REF CURSOR is a list of lookup IDs.
I then want to take this list and bring it to another data store and use it in a select statement. It will absolutely be possible to accomplish this by looping through the REF CURSOR, but I'm hoping to avoid that. I would much rather be able to write a SELECT...WHERE lookup_id IN result_ref_cursor OR SELECT...WHERE EXISTS...
First, is this possible, or should I just try a less-than-elegant solution? If it is possible, any hints as to where I should start looking?
I'm relatively new to Oracle, but fairly experienced in RDBMSs in general, so feel free to just throw some links at me and I can study up. Much appreciated.
Why kurosch didn't put his response up as an answer, I have no idea.
So, what you do is define a SQL type which describes one row of the output of the ref cursor, and also a SQL type which is a table of the previous one. Then you create a pipelined function which returns the rows returned by the ref cursor. This function can then be used in standard SQL. I'm borrowing from Ask Tom on this one.
create or replace type myLookupId as object ( id int )
/
create or replace type myLookupIdTable as table of myLookupId
/
create or replace function f return myLookupIdTable PIPELINED is
  l_data   myLookupId;
  l_id     number;
  p_cursor SYS_REFCURSOR;
begin
  -- drain the ref cursor and pipe each id out as a row
  p_cursor := function_returning_ref_cursor();
  loop
    fetch p_cursor into l_id;
    exit when p_cursor%notfound;
    l_data := myLookupId(l_id);
    pipe row (l_data);
  end loop;
  close p_cursor; -- don't leak the cursor
  return;
end;
/
And now a sample query...
SELECT *
FROM SOME_TABLE
WHERE lookup_id in (SELECT ID FROM table(f));
Sorry if the code isn't exactly right, I don't have the DB to test right now.
There are several directions you could go with this, but I did a search on the specific solution you want and it seems no-one has done it often enough for it to show up. What you can do is search Oracle Metalink - that is usually really good at finding obscure answers. (Though you do need a service agreement - I just found out that mine expired :( )
Other possible solutions:
Create a link between the data stores so that you can do the select in the PL/SQL directly
Create a function in Java that loops through it for you to create the string for the query. This will at least look a little prettier.
Otherwise, REF CURSORs need to go back and forth - I don't know how you can pipe the results of the REF CURSOR from one connection to the query in another without looping through it.
