How to rename multiple stored procedures in Oracle

I'm using Oracle 12c, and I've run into the following problem:
I have multiple stored procedures, like:
schema.TEST1, schema.TEST2, schema.TEST3....
Now I want to rename all of them to schema.TEST01, schema.TEST02, schema.TEST03..., or to any name that follows a convention I define beforehand. This is for backup purposes.
In Oracle, I can't rename a stored procedure with an ALTER ... RENAME statement the way some other databases allow. How can I do this in one click?
Thanks!

Frankly, it is nonsense to back up code this way; you do not need to do that. But I took it as a challenge, so I'd like to present the code below. Make changes according to your schema and naming convention, and switch to a CLOB if the source text is too large for a VARCHAR2.
DECLARE
  TYPE names_table IS TABLE OF VARCHAR2(50);
  names names_table;
  TYPE source_txt_table IS TABLE OF VARCHAR2(32767);
  source_txt       source_txt_table;
  header           VARCHAR2(32767);
  final_source_txt VARCHAR2(32767);
BEGIN
  SELECT object_name
  BULK COLLECT INTO names
  FROM user_procedures
  WHERE object_type = 'PROCEDURE'
    AND object_name IN ('DO_SOMETHING_1', 'DO_SOMETHING_2');

  FOR i IN 1 .. names.LAST LOOP
    -- user_source (not all_source) avoids picking up same-named objects from other schemas
    SELECT text
    BULK COLLECT INTO source_txt
    FROM user_source
    WHERE name = names(i)
      AND type = 'PROCEDURE'
    ORDER BY line;

    source_txt(1) := 'CREATE OR REPLACE ' || source_txt(1);
    -- make changes according to your new naming convention
    header := REGEXP_REPLACE(UPPER(source_txt(1)), names(i), 'HR.' || names(i) || '_BCK');
    source_txt(1) := header;

    FOR j IN 1 .. source_txt.LAST LOOP
      final_source_txt := final_source_txt || source_txt(j);
    END LOOP;

    EXECUTE IMMEDIATE final_source_txt;
    dbms_output.put_line('Success: ' || names(i));

    final_source_txt := NULL;
    header           := NULL;
    source_txt       := NULL;
  END LOOP;
END;

For backup? That's a rather poorly chosen backup system.
What if the database dies because of a disk failure? You'll lose everything (including your "backup" procedures).
How many "backups" do you plan to keep? For example, one of my schemas contains 643 procedures/functions/packages. With two backups, I'm already close to 2K objects. If I performed backups regularly (e.g. daily), in a matter of only a month I'd be close to 20K objects. I really wouldn't want to do that.
Therefore, why wouldn't you consider something else? For example:
a version control system (such as Git)
a Data Pump export as a "logical" backup (see the sketch after this list)
letting the DBA take care of RMAN backups
if you want to do it manually, some GUI tools (such as TOAD) let you select all objects and create a script - that option stores the source code as files on your hard disk, and you can then back those files up somewhere else (burn them to a DVD, copy them to a USB stick, another hard drive, somewhere within your network ...)
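For illustration, a metadata-only Data Pump export of one schema can even be kicked off from PL/SQL via DBMS_DATAPUMP. A minimal sketch, assuming the HR schema and the default DATA_PUMP_DIR directory object (the dump file name is made up):
DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(h, 'hr_meta.dmp', 'DATA_PUMP_DIR');
  -- limit the job to the HR schema
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  -- metadata only: exclude table rows
  dbms_datapump.data_filter(h, 'INCLUDE_ROWS', 0);
  dbms_datapump.start_job(h);
  -- the job keeps running in the background after we detach
  dbms_datapump.detach(h);
END;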
Finally, to answer your question: how do you do what you asked for in one click? As far as I can tell, you can't. You'd first have to write a procedure to do the job, but then you're back to my second objection to your approach: how will that procedure know that proc1 is the "original" while proc01 is a backup version? Why wouldn't someone name their procedure proc05 in the first place? That's a valid name.

You can also try using the DBMS_METADATA package to export the DDL of the schema's objects.
I have written an example; you can adapt it to your needs.
CREATE DIRECTORY EXTERNAL AS '/external/';

DECLARE
  h    PLS_INTEGER;
  th   PLS_INTEGER;
  fh   utl_file.file_type;
  ddls CLOB;
  pos  PLS_INTEGER;
  sysd VARCHAR2(50);
BEGIN
  h := dbms_metadata.open('PROCEDURE');
  dbms_metadata.set_filter(h, 'SCHEMA', 'HR');
  th := dbms_metadata.add_transform(h, 'DDL');
  dbms_metadata.set_count(h, 50);  -- fetch up to 50 objects per call
  sysd := TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS');
  fh := utl_file.fopen('EXTERNAL', 'SCHEMA_BCK_' || sysd || '.bck', 'w', 32767);
  LOOP
    ddls := dbms_metadata.fetch_clob(h);
    EXIT WHEN ddls IS NULL;  -- no more procedures left
    -- UTL_FILE.PUT takes VARCHAR2, so write the CLOB in chunks
    pos := 1;
    WHILE pos <= dbms_lob.getlength(ddls) LOOP
      utl_file.put(fh, dbms_lob.substr(ddls, 32000, pos));
      pos := pos + 32000;
    END LOOP;
  END LOOP;
  utl_file.fclose(fh);
  dbms_metadata.close(h);
END;
/
This is far safer against database failures, and you will not unnecessarily clutter your database schema with backup objects.
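If you only need a quick ad-hoc export rather than a file on the server, a single query along these lines returns the same DDL (the schema name is an assumption):
-- one CLOB of DDL per procedure owned by HR
SELECT dbms_metadata.get_ddl('PROCEDURE', object_name, owner)
FROM   all_objects
WHERE  owner = 'HR'
AND    object_type = 'PROCEDURE';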

Related

Partially consuming a cursor in multiple pl/sql calls without defining it in package spec

I have a large source data set (a few million rows) that requires complex processing, resulting in a much larger amount of data which should then be offloaded and stored as files. The storage requires dividing the resulting data up based on certain parameters, namely N source rows that meet certain criteria.
Since it's possible to compute said parameters within PL/SQL, it was decided that the most efficient way would be to create a package, declare a spec-level cursor for the source rows in it, then write a procedure that partially consumes the opened cursor until the criteria are met and fills temporary tables with the resulting data, which is then offloaded; the procedure is called again, repeating until there are no more source rows. The PL/SQL basically looks like this:
create or replace package body generator as
  cursor glob_cur_body(cNo number) is
    select *
    from source_table
    where no = cNo
    order by conditions;

  procedure close_cur is
  begin
    if glob_cur_body%isopen then
      close glob_cur_body;
    end if;
  end close_cur;

  procedure open_cur(pNo number) is
  begin
    close_cur;
    open glob_cur_body(pNo);
  end open_cur;

  function consume_cur return varchar2 is
    v       source_table%rowtype;
    part_id varchar2(100);
  begin
    fetch glob_cur_body into v;
    if glob_cur_body%notfound then
      return null;
    end if;
    -- Clear temporary tables
    -- Do the processing until the criteria are met or there are no more rows
    -- Fill the temporary tables and part_id
    return part_id;
  end consume_cur;
end generator;
And the consumer does the following (in pseudocode):
generator.open_cur;
part_id = generator.consume;
while ( part_id != null )
{
//offload data from temp tables
part_id = generator.consume;
}
generator.close_cur;
It works fine, but unfortunately there's one problem: a spec-level cursor makes the package stateful, meaning that recompiling it results in ORA-04068 for sessions that have already accessed it. That makes maintenance cumbersome, because there's a lot more to the package besides the functions above, and it's actively used for unrelated purposes.
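To see why, a minimal sketch of the statefulness problem (the package name is hypothetical):
-- any spec-level variable or cursor gives the package per-session state
CREATE OR REPLACE PACKAGE stateful_pkg AS
  g_counter NUMBER := 0;
END;
/
-- Session 1 references stateful_pkg; the package is then recompiled from
-- another session. Session 1's next reference raises
-- ORA-04068: existing state of packages has been discarded.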
So I want to get rid of the spec-level cursor, but I'm not sure that's possible. Some ideas I've already discarded:
Re-opening the cursor and skipping N rows: terrible performance, and unreliable because it is affected by any changes made to the data between opens.
Fetching the source cursor into a PL/SQL table: the size is too large.
Filling up the entire unload tables at once and splitting them later: the size is too large, and the performance is subpar.
Opening the cursor as a ref cursor and storing the ref cursor variable in a dedicated package: impossible, as PL/SQL doesn't allow SYS_REFCURSOR variables at spec level.
Having open_cur return a ref cursor, storing it in the offloader, and then somehow passing it to consume_cur: looked viable, but the offloader is in Java, and JDBC doesn't allow binding SYS_REFCURSOR parameters.
Changing consume_cur to a pipelined function: could have worked, but Oracle buffers pipelined rows, meaning it would execute multiple times when data is fetched from it row by row. Also counterintuitive.
The only other idea I've had so far is to make a dedicated package that stores said cursor, with open and close procedures and a get_cursor function returning a ref cursor; generator.consume_cur would then call get_cursor. That would make the dedicated package (which is unlikely to change) stateful and the main package stateless. However, it seems like a half-baked patch rather than a proper solution. Is there a more decent way of achieving what I need? Perhaps changing the logic completely, without affecting performance and storage limits too much.
I have some trouble understanding your question, but I can provide clarification for your ideas.
Opening the cursor as refcursor and storing refcursor variable in a
dedicated package: impossible, as pl/sql doesn't allow sys_refcursor
variables at spec levels
There is a workaround using DBMS_SQL:
create table test_rows as (select level rr from dual connect by level <= 100);

create or replace package cursor_ctx is
  -- holds the DBMS_SQL cursor number between calls
  ctx_number integer;
end;
/

declare
  p_cursor sys_refcursor;
begin
  open p_cursor for 'select rr from test_rows';
  -- convert the ref cursor into a cursor number we can keep in the package
  cursor_ctx.ctx_number := dbms_sql.to_cursor_number(p_cursor);
end;
/
This part consumes data from the cursor:
declare
  p_cursor sys_refcursor;
  type l_number is table of number;
  v_numbers l_number;
begin
  if dbms_sql.is_open(cursor_ctx.ctx_number) then
    -- convert the stored cursor number back into a ref cursor
    p_cursor := dbms_sql.to_refcursor(cursor_ctx.ctx_number);
    fetch p_cursor bulk collect into v_numbers limit 10;
    if v_numbers.count < 10 then
      dbms_output.put_line('No more data, close cursor');
      close p_cursor;
      cursor_ctx.ctx_number := null;
    else
      -- hand the cursor number back for the next call
      cursor_ctx.ctx_number := dbms_sql.to_cursor_number(p_cursor);
    end if;
    for i in nvl(v_numbers.first, 1) .. nvl(v_numbers.last, -1) loop
      dbms_output.put_line(v_numbers(i));
    end loop;
  else
    dbms_output.put_line('Null or cursor closed');
  end if;
end;
/
Pipelined functions do have a feature for splitting an input cursor into chunks: Parallel Enabled Pipelined Table Functions.
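A minimal sketch of such a function, assuming a numeric source column; the collection type and function name are illustrative only:
CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION chunked(p_cur SYS_REFCURSOR)
  RETURN num_tab PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)  -- Oracle may split the cursor across parallel slaves
IS
  v NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO v;
    EXIT WHEN p_cur%NOTFOUND;
    PIPE ROW (v);
  END LOOP;
  RETURN;
END;
/
-- usage: SELECT * FROM TABLE(chunked(CURSOR(SELECT rr FROM test_rows)));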
And JDBC does allow using SYS_REFCURSOR as an output parameter; a SYS_REFCURSOR maps to a ResultSet.

PL/SQL script to export a database

I want to export all the data from a database - functions, procedures, views, triggers - for which a user has owner privileges. I know that SQL Developer has an option for exporting a database to a .sql file, but I want to do this from code. When I run the code, I want it to create a file with the .sql extension which must contain all the data from the database. First of all, I want to know if that is possible, and if it is, can anyone give me some hints for doing it?
I started by creating a file:
CREATE DIRECTORY test_dir AS 'H:\';

DECLARE
  out_file UTL_FILE.FILE_TYPE;
BEGIN
  -- the directory object name is stored in upper case, so pass 'TEST_DIR'
  out_file := UTL_FILE.FOPEN('TEST_DIR', 'test.sql', 'W');
  UTL_FILE.PUT_LINE(out_file, 'here will be the database export');
  UTL_FILE.FCLOSE(out_file);
END;
/
The Oracle export/import utilities will export/import database objects in user mode (use the OWNER parameter) using a binary file format:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/exp_imp.htm
However, if you want to export the metadata for your objects and data into .sql files, you can query Oracle's catalog views: user_objects for a user's database objects, user_tables for tables, user_constraints for constraints (and even user_source to get the text of compiled objects). If you intend your .sql files to re-create objects and data, you are in for some heavy lifting, especially if you want your scripts to be portable. In that case I would look at 3rd-party tools.
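For instance, a sketch of pulling the stored source for every PL/SQL object the current user owns (spooling and reassembly are left out):
-- each row is one line of source; objects are reassembled by ordering on line
SELECT name, type, line, text
FROM   user_source
ORDER  BY type, name, line;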
I guess this should be what you are looking for; I hope it helps. This is just an example - other object types can be incorporated as required.
SET SERVEROUTPUT ON SIZE UNLIMITED
spool <target path>/filename

BEGIN
  FOR i IN
    (SELECT al.*
     FROM all_source al
     WHERE owner = 'AVROY'
     ORDER BY name, line
    )
  LOOP
    dbms_output.put_line(i.text);
  END LOOP;

  FOR i IN
    (SELECT * FROM all_tables WHERE owner = 'AVROY')
  LOOP
    BEGIN
      dbms_output.put_line(dbms_metadata.get_ddl('TABLE', i.table_name, 'AVROY'));
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- skip tables whose DDL cannot be extracted
    END;
  END LOOP;
END;
/
spool off
You might also want to use the export and import utilities of Oracle Database.
http://www.oracle-dba-online.com/export_and_import.htm

Creating INSERT statements to export the data from every existing table

I am currently not able to get a proper database dump, because the DB runs on a remote server inside a closed system - no remote copying is possible, and the only way to get files in or out is by being physically present at the server's location or via e-mail (but I can't send a several-GB dump by mail...).
However, I still need the data in order to import it into my dev system.
I figure the best way of doing this is by creating INSERT statements that contain the needed information.
SQL Developer can actually do this, but apparently only for one table at a time. As soon as one selects multiple tables, the respective option disappears from the right-click menu and one can only export the DDL statements :-/
So this approach is not really viable for me, as there are hundreds of tables...
Does anyone know of a standardized way to create INSERT statements by querying the metadata tables (user_tables, user_columns, ...)? I could imagine that it might be possible to create all the statements by cleverly joining those meta tables. However, before dumping several hours into this approach, I'd appreciate it if someone could confirm this suspicion first.
Also, someone else must have had this problem before, so I hope some of you may be able to give me a hint on other approaches. Thanks in advance!
My answer isn't a full solution.
1) To extract the DDL, use:
select table_name, dbms_metadata.get_ddl(object_type => 'TABLE', name => table_name) from user_tables;
2) To extract the records from a table, use xmltype (built from a ref cursor) and dbms_xmlstore to insert them.
Below is only a suggestion of how to do this.
create table test as select level as "LP" from dual connect by level < 100;

declare
  v_cursor sys_refcursor;
  xmlDoc   xmltype;
  insCtx   dbms_xmlstore.ctxType;
  v_rows   number;
begin
  open v_cursor for 'select * from test';
  -- canonical <ROWSET>/<ROW> XML built straight from the ref cursor
  xmlDoc := xmltype(v_cursor);
  close v_cursor;
  dbms_output.put_line(xmlDoc.getClobVal());  -- the extracted rows as XML

  insCtx := dbms_xmlstore.newContext('TEST');
  dbms_xmlstore.clearUpdateColumnList(insCtx);
  v_rows := dbms_xmlstore.insertXML(insCtx, xmlDoc);
  dbms_output.put_line('ROWS inserted: ' || v_rows);
  dbms_xmlstore.closeContext(insCtx);
  commit;
end;

Creating table before creating a cursor in Oracle

I have a PL/SQL procedure which creates a temporary table, extracts the data from it using cursors, processes the data, and then drops the temporary table. However, Oracle doesn't allow the use of a cursor if the table doesn't exist in the database.
Please help me handle this.
Your statement is not quite correct - you can use a cursor for pretty much arbitrary queries. See below:
create or replace procedure fooproc
is
  type acursor is ref cursor;
  mycur  acursor;
  mydate date;
begin
  execute immediate 'create global temporary table footmp (bar date) on commit delete rows';
  execute immediate 'insert into footmp values (SYSDATE)';
  open mycur for 'select * from footmp';
  loop
    fetch mycur into mydate;
    exit when mycur%notfound;
    dbms_output.put_line(mydate);
  end loop;
  close mycur;
  execute immediate 'drop table footmp';
end fooproc;
/
(More details here - especially note that this short proc is not safe at all, since the table name is fixed and not session-dependent.)
It is (quite) a bit ugly, and I'm not suggesting you use it - rather, you should be thinking about whether you need that procedure-specific temporary table at all.
See this other article:
DO NOT dynamically create them [temp tables], DO NOT dynamically create them, please -- do NOT dynamically create them.
Couldn't you use a global temporary table? Do you actually need a temporary table at all (i.e., wouldn't using a cursor on the select statement you'd use to fill that table work)?
Or, if you wish to avoid the differences between global temporary tables and the "regular" permanent tables you may be used to (see the Oracle docs on temp table data availability, lifetime, etc.), simply create the table first (nologging). Assuming nobody else is using this table, your procedure could truncate it before/after your processing.
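A minimal sketch of the global-temporary-table approach - the table is created once, outside the procedure, and each session sees only its own rows (names are illustrative):
-- one-time DDL, not inside the procedure
CREATE GLOBAL TEMPORARY TABLE footmp (bar DATE) ON COMMIT DELETE ROWS;

CREATE OR REPLACE PROCEDURE fooproc2 IS
BEGIN
  INSERT INTO footmp VALUES (SYSDATE);
  -- static SQL compiles fine now: the table always exists
  FOR r IN (SELECT bar FROM footmp) LOOP
    dbms_output.put_line(r.bar);
  END LOOP;
END fooproc2;
/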

Dumping CLOB fields into files?

Say you have the table:
Column_name | data_type
Title       | VARCHAR2
Text        | CLOB
with some rows:
SomeUnkownMovie | A long time ago in a galaxy far, far away....(long text ahead)
FredMercuryBio | Awesomeness and stuff....(more long text)
Is there a way I could query that so it outputs files like
SomeUnkownMovie.txt
FredMercuryBio.txt
(and of course, with their respective texts inside)
I reckon this should be an easy enough SQL*Plus script... though I'm just not the one to write it :(
thanks!
This PL/SQL code should work in Oracle 11g.
It dumps the text of the CLOBs into a directory, with the title as the filename.
BEGIN
  FOR rec IN (
    select title, text
    from mytable
  )
  LOOP
    -- DUMP_SOURCES is a directory object pointing at the target directory
    DBMS_XSLPROCESSOR.clob2file(rec.text, 'DUMP_SOURCES', rec.title || '.txt');
  END LOOP;
END;
If DBMS_XSLPROCESSOR isn't available, you could replace DBMS_XSLPROCESSOR.clob2file with a procedure that uses UTL_FILE. For example:
CREATE OR REPLACE PROCEDURE clob2file (
  clob_in        IN CLOB,
  directory_name IN VARCHAR2,
  file_name      IN VARCHAR2
)
IS
  file_handle UTL_FILE.FILE_TYPE;
  clob_part   VARCHAR2(1024);
  clob_length NUMBER;
  offset      NUMBER := 1;
BEGIN
  clob_length := DBMS_LOB.GETLENGTH(clob_in);
  -- open with an explicit max_linesize; the default is only 1024
  file_handle := UTL_FILE.FOPEN(directory_name, file_name, 'W', 32767);
  LOOP
    EXIT WHEN offset > clob_length;  -- > (not >=), or the final chunk is lost
    clob_part := DBMS_LOB.SUBSTR(clob_in, 1024, offset);
    UTL_FILE.PUT(file_handle, clob_part);
    offset := offset + 1024;
  END LOOP;
  UTL_FILE.FFLUSH(file_handle);
  UTL_FILE.FCLOSE(file_handle);
EXCEPTION
  WHEN OTHERS THEN
    UTL_FILE.FCLOSE(file_handle);
    RAISE;
END;
Or you could replace DBMS_XSLPROCESSOR.clob2file with DBMS_ADVISOR.create_file.
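For instance, a sketch with DBMS_ADVISOR (it requires the ADVISOR privilege; the table and directory names are taken from the examples above):
BEGIN
  FOR rec IN (SELECT title, text FROM mytable) LOOP
    -- same directory object as before; one file per row
    DBMS_ADVISOR.create_file(rec.text, 'DUMP_SOURCES', rec.title || '.txt');
  END LOOP;
END;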
Are you trying to generate files on the database server's file system, or on the client's?
If you are trying to generate files on the database server's file system, there is an example of exporting a CLOB to a file in another Stack Overflow thread that is based on Tim Hall's LOB export examples (Tim's site appears to be down at the moment).
If you're trying to generate files on the client file system, it would involve much more complex SQL*Plus scripting. You'd be looking at querying the table and using the data to dynamically generate one SQL*Plus script per file you wanted to generate, then dynamically calling those scripts. You'd really be pushing SQL*Plus's scripting capabilities, so that's not an architecture I would generally advocate, but I believe it could be done.
If you do need to generate files on the client file system, I'd generally prefer to use something other than SQL*Plus. For example, there is an example of a small Java class that reads and writes CLOB and BLOB data to and from files on the AskTom site. I'd tend to write a small Java utility that runs on the client and exports the data rather than trying to put too much logic into SQL*Plus scripts.
