Dumping CLOB fields into files? (Oracle)

Say you have the table:
Column_name | data_type
Title | Varchar2
Text | CLOB
with some rows:
SomeUnkownMovie | A long time ago in a galaxy far, far away....(long text ahead)
FredMercuryBio | Awesomeness and stuff....(more long text)
Is there a way I could query that so it outputs files like
SomeUnkownMovie.txt
FredMercuryBio.txt
(and ofc, with their respective texts inside)
I reckon this should be an easy enough SQL*Plus script... though I'm just not the one to write it :(
thanks!

This PL/SQL code should work in Oracle 11g.
It dumps the text of the CLOBs into a directory, using the title as the filename.
BEGIN
  FOR rec IN (
    SELECT title, text
    FROM mytable
  )
  LOOP
    -- DUMP_SOURCES is an Oracle directory object; see the setup note below
    DBMS_XSLPROCESSOR.clob2file(rec.text, 'DUMP_SOURCES', rec.title || '.txt');
  END LOOP;
END;
/
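Note that DUMP_SOURCES must be an existing Oracle directory object that the database user can write to. A minimal setup sketch (the path and grantee are assumptions; adjust them for your server):

CREATE OR REPLACE DIRECTORY dump_sources AS '/u01/app/oracle/dump_sources';
GRANT READ, WRITE ON DIRECTORY dump_sources TO some_user;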
If DBMS_XSLPROCESSOR isn't available, you could replace DBMS_XSLPROCESSOR.clob2file with a procedure that uses UTL_FILE.
For example:
CREATE OR REPLACE PROCEDURE CLOB2FILE (
  clob_in        IN CLOB,
  directory_name IN VARCHAR2,
  file_name      IN VARCHAR2
)
IS
  file_handle UTL_FILE.FILE_TYPE;
  clob_part   VARCHAR2(1024);
  clob_length NUMBER;
  offset      NUMBER := 1;
BEGIN
  clob_length := DBMS_LOB.GETLENGTH(clob_in);
  file_handle := UTL_FILE.FOPEN(directory_name, file_name, 'W');
  -- write the CLOB out in 1024-character chunks
  WHILE offset <= clob_length LOOP
    clob_part := DBMS_LOB.SUBSTR(clob_in, 1024, offset);
    UTL_FILE.PUT(file_handle, clob_part);
    offset := offset + 1024;
  END LOOP;
  UTL_FILE.FFLUSH(file_handle);
  UTL_FILE.FCLOSE(file_handle);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(file_handle) THEN
      UTL_FILE.FCLOSE(file_handle);
    END IF;
    RAISE;
END;
/
Or perhaps replace DBMS_XSLPROCESSOR.clob2file with dbms_advisor.create_file.
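For instance, a minimal sketch using DBMS_ADVISOR (same table and directory object as above; note that DBMS_ADVISOR.CREATE_FILE requires the ADVISOR privilege):

BEGIN
  FOR rec IN (SELECT title, text FROM mytable) LOOP
    DBMS_ADVISOR.create_file(rec.text, 'DUMP_SOURCES', rec.title || '.txt');
  END LOOP;
END;
/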

Are you trying to generate files on the database server file system, or on the client file system?
If you are trying to generate files on the database server file system, there is an example of exporting a CLOB to a file in another StackOverflow thread that is based on Tim Hall's LOB export examples (Tim's site appears to be down at the moment).
If you're trying to generate files on the client file system, it would involve much more complex SQL*Plus scripting. You'd be looking at querying the table and using the data to dynamically generate one SQL*Plus script per file you want to create, then dynamically calling those scripts. You'd really be pushing SQL*Plus's scripting capabilities, so that's not an architecture I would generally advocate, but I believe it could be done.
If you do need to generate files on the client file system, I'd generally prefer to use something other than SQL*Plus. For example, there is a small Java class on the AskTom site that reads and writes CLOB and BLOB data to and from files. I'd tend to write a small Java utility that runs on the client and exports the data, rather than trying to put too much logic into SQL*Plus scripts.

Related

How to rename multiple stored procedures in Oracle

I'm using Oracle 12c and have run into the following problem:
I have multiple stored procedures like:
schema.TEST1, schema.TEST2, schema.TEST3...
Now I want to rename all of them to schema.TEST01, schema.TEST02, schema.TEST03... or any name that follows a convention I choose; this is for backup.
In Oracle, I can't rename a stored procedure with an ALTER ... RENAME statement the way I can rename a table. How can I do this in one click?
Thanks!
Frankly, backing up procedures this way is nonsense and you don't need to do it. But I took it as a challenge, so here is the code. Adjust it to your schema and naming convention, and use a CLOB instead of VARCHAR2 if the source text is too large.
DECLARE
  TYPE names_table IS TABLE OF VARCHAR2(50);
  names names_table;
  TYPE source_txt_table IS TABLE OF VARCHAR2(32767);
  source_txt source_txt_table;
  header VARCHAR2(32767);
  final_source_txt VARCHAR2(32767);
BEGIN
  SELECT object_name BULK COLLECT INTO names
  FROM user_procedures
  WHERE object_type = 'PROCEDURE'
    AND object_name IN ('DO_SOMETHING_1', 'DO_SOMETHING_2');

  FOR i IN 1 .. names.COUNT
  LOOP
    SELECT text BULK COLLECT INTO source_txt
    FROM user_source
    WHERE name = names(i)
      AND type = 'PROCEDURE'
    ORDER BY line;

    -- turn the stored source into a CREATE statement under the new name
    source_txt(1) := 'CREATE OR REPLACE ' || source_txt(1);
    header := REGEXP_REPLACE(UPPER(source_txt(1)), names(i), 'HR.' || names(i) || '_BCK'); -- change according to your naming convention
    source_txt(1) := header;

    FOR j IN 1 .. source_txt.COUNT
    LOOP
      final_source_txt := final_source_txt || source_txt(j);
    END LOOP;

    EXECUTE IMMEDIATE final_source_txt;
    DBMS_OUTPUT.put_line('Success: ' || names(i));

    final_source_txt := NULL;
    header := NULL;
    source_txt := NULL;
  END LOOP;
END;
/
For backup? That's a rather poorly chosen backup system.
What if the database dies because of a disk failure? You'll lose everything, including your "backup" procedures.
How many "backups" do you plan to keep? For example, one of my schemas contains 643 procedures/functions/packages. With two backups, I'm already close to 2K objects. If I performed backups regularly (e.g. daily), in a matter of only a month I'd be close to 20K objects. I really wouldn't want to do that.
Therefore, why wouldn't you consider something else? For example,
version control system (such as Git)
perform Data Pump Export as a "logical" backup
let the DBA take care of RMAN backups
if you want to do it manually, some GUI tools (such as TOAD) let you select all objects and create a script; that option stores the source code as files on your hard disk, which you can then back up somewhere else (burn them to a DVD, copy them to a USB memory stick, another hard disk drive, somewhere within your network ...)
Finally, to answer your question: how to do what you asked for in one click? As far as I can tell, you can't. You'd first have to write a procedure which would do the job, but then you're back to my second objection to your approach. How will that procedure know that proc1 is "original", while proc01 is a backup version? Why wouldn't someone name their procedures proc05 initially? That's a valid name.
You can also try the DBMS_METADATA package to export the DDL of schema objects.
I have written an example; modify it according to your needs.
CREATE DIRECTORY EXTERNAL AS '/external/';

DECLARE
  h    PLS_INTEGER;
  th   PLS_INTEGER;
  fh   utl_file.file_type;
  ddls CLOB;
  sysd VARCHAR2(50);
BEGIN
  h := DBMS_METADATA.open('PROCEDURE');
  DBMS_METADATA.set_filter(h, 'SCHEMA', 'HR');
  th := DBMS_METADATA.add_transform(h, 'DDL');
  DBMS_METADATA.set_count(h, 50);          -- fetch up to 50 objects per call
  ddls := DBMS_METADATA.fetch_clob(h);
  SELECT TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') INTO sysd FROM dual;
  fh := utl_file.fopen('EXTERNAL', 'SCHEMA_BCK_' || sysd || '.bck', 'w');
  utl_file.put(fh, ddls);
  UTL_FILE.FCLOSE(fh);
  DBMS_METADATA.CLOSE(h);
END;
/
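A single FETCH_CLOB call returns at most one SET_COUNT batch (50 objects here). If the schema holds more procedures than that, a minimal sketch of the loop variant, reusing the h and fh handles from above:

LOOP
  ddls := DBMS_METADATA.fetch_clob(h);
  EXIT WHEN ddls IS NULL;   -- NULL signals that all objects have been fetched
  utl_file.put(fh, ddls);   -- note: UTL_FILE.PUT writes at most 32767 bytes per call
END LOOP;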
This is far safer against database failures, and you won't unnecessarily populate your database schema with backup objects.

How to load an extracted ORACLE CLOB into only 1 TEXT column in Postgres?

I'm currently looking at migrating CLOB data from Oracle into Postgres from an external file. I have created my table in Postgres with a TEXT column, which replicates the Oracle CLOB, and now I just need to get my data in.
So far I have extracted a CLOB column from Oracle into a file, as per the below. It is only one CLOB from one column, so I'm trying to load the contents of this entire CLOB into one column in Postgres.
CREATE TABLE clob_test (
  id       NUMBER,
  clob_col CLOB);

DECLARE
  c CLOB;
  CURSOR scur IS
    SELECT text
    FROM dba_source
    WHERE rownum < 200001;
BEGIN
  EXECUTE IMMEDIATE 'truncate table clob_test';
  FOR srec IN scur LOOP
    c := c || srec.text;
  END LOOP;
  INSERT INTO clob_test VALUES (1, c);
  COMMIT;
END;
/

DECLARE
  buf CLOB;
BEGIN
  SELECT clob_col
  INTO buf
  FROM clob_test
  WHERE id = 1;
  dbms_advisor.create_file(buf, 'TEST_DIR', 'clob_1.txt');
END;
/
This works fine and generates the clob_1.txt file containing the full contents of the Oracle CLOB column CLOB_COL. Below is an example of the file output; it seems to contain every possible character you can think of, including "~"...
/********** Types and subtypes, do not reorder **********/
type BOOLEAN is (FALSE, TRUE);
type DATE is DATE_BASE;
type NUMBER is NUMBER_BASE;
subtype FLOAT is NUMBER; -- NUMBER(126)
subtype REAL is FLOAT; -- FLOAT(63)
...
...
...
END;
/
My problem now is: how do I get the entire contents of this one file into one record in Postgres, so it simulates exactly how the data was originally stored in one record in Oracle?
Effectively what I'm trying to achieve is similar to this; it works, but the formatting is awful and doesn't really mirror how the data was originally stored.
POSTGRES> insert into clob_test select pg_read_file('/home/oracle/clob_1.txt');
I have tried using the COPY command, but I'm having two issues. First, if there is a carriage return it will treat that as another record and split the file up; second, I can't find a delimiter that isn't already used somewhere in the file. Is there some way I can bypass the delimiter and just tell Postgres to COPY everything from this file without delimiters, as it's only one column?
Any help would be great 😊
Note for other answerers: This is incomplete and will still put the data into multiple records; the question also wants all the data in a single field.
Use COPY ... FROM ... CSV DELIMITER e'\x01' QUOTE e'\x02'. The only thing this can't handle is actual binary data, which, as I understand it, is not permitted in a CLOB (I have never used Oracle myself). This only avoids the delimiter issue; it will still insert the data into one row per line of the input.
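A minimal sketch of that COPY invocation, assuming the clob_test table from the question (reading a server-side file this way requires superuser or pg_read_server_files rights):

COPY clob_test (clob_col)
FROM '/home/oracle/clob_1.txt'
WITH (FORMAT csv, DELIMITER E'\x01', QUOTE E'\x02');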
I'm not sure how to go about fixing that issue, but you should be aware that it's probably not possible to do this correctly in all cases. The largest field value PG supports is 1 GB, while a CLOB can hold up to 4 GB. If you need to correctly import CLOBs larger than 1 GB, the only route available is PG's large object interface.
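For the large object route, a sketch along these lines could work (clob_test_lo is a hypothetical table; lo_import reads a file on the database server and requires superuser rights):

CREATE TABLE clob_test_lo (id integer, clob_oid oid);
INSERT INTO clob_test_lo VALUES (1, lo_import('/home/oracle/clob_1.txt'));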

Not getting expected output in cursor by iterations

I have a SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these print statements appears in the log files, but only once the script has completed.
Is there any way to ensure that the output appears in the log file while the script is running?
Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it is just putting that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE, and displays it.
So the only way you can make the output appear in the log file more frequently is to break up a large PL/SQL block into multiple smaller blocks, so control is returned to the client more often. This may not be practical, depending on what your code is doing.
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or to use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.
If it is possible for you, you should replace the calls to dbms_output.put_line with calls to your own procedure.
Here is the code for this procedure, WRITE_LOG, which gives you the ability to choose between two logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
--   CREATE TABLE dbg (u varchar2(200)  --- username
--                   , d timestamp      --- date
--                   , l varchar2(4000) --- log
--   );
AS
  pragma autonomous_transaction;
BEGIN
  insert into dbg(u, d, l) values (user, sysdate, p_log);
  commit;
END to_dbg_table;
/
or write directly to a file on the DB server that hosts your database.
This uses the Oracle directory TMP_DIR:
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--- CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
l_file utl_file.file_type;
BEGIN
l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
utl_file.put_line(l_file, p_log);
utl_file.fflush(l_file);
utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then the WRITE_LOG procedure, which can switch between the two modes, or be deactivated to avoid performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
  -- g_DEBUG can be set as a package variable defaulted to FALSE,
  -- then changed when debugging is required
  g_DEBUG boolean := true;
  -- the log file name can be set with several methods...
  g_logfname varchar2(32767) := 'my_output.log';
  -- choose between the 2 logging solutions:
  -- file mode:
  g_TYPE varchar2(7) := 'file';
  -- table mode:
  --g_TYPE varchar2(7) := 'table';
BEGIN
  if g_DEBUG then
    if g_TYPE = 'file' then
      to_dbg_file(g_logfname, p_log);
    elsif g_TYPE = 'table' then
      to_dbg_table(p_log);
    end if;
  end if;
END write_log;
/
And here is how to test the above:
1) Launch this (file mode) from SQL*Plus:
BEGIN
  write_log('this is a test');
  for i in 1..100 loop
    DBMS_LOCK.sleep(1);
    write_log('iter=' || i);
  end loop;
  write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
Two alternatives:
You can insert your logging details into a logging table using an autonomous transaction, and query that table from another session (SQL*Plus, Toad, SQL Developer, etc.). You have to use an autonomous transaction so you can commit your logging without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information, so you don't need a second session at all. See here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html
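A minimal sketch of that pipelined approach (the type and function names here are illustrative):

CREATE OR REPLACE TYPE t_log_lines AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION long_running_job RETURN t_log_lines PIPELINED IS
BEGIN
  FOR i IN 1 .. 10 LOOP
    -- ... do one unit of the real work here ...
    PIPE ROW ('finished step ' || i);
  END LOOP;
  RETURN;
END;
/
-- the monitoring query then sees rows as they are piped:
SELECT * FROM TABLE(long_running_job);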
The DBMS_OUTPUT buffer is read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, that means the buffer only gets flushed once the procedure finishes.
You can apply the method described in this SO answer to write the DBMS_OUTPUT buffer to a file.
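For reference, a minimal sketch of draining the buffer yourself with GET_LINE, which is essentially what that method does:

DECLARE
  l_line   VARCHAR2(32767);
  l_status INTEGER;
BEGIN
  LOOP
    DBMS_OUTPUT.get_line(l_line, l_status);
    EXIT WHEN l_status <> 0;   -- status 0 means a line was returned
    -- write l_line wherever you need it, e.g. via UTL_FILE
  END LOOP;
END;
/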
Set session metadata MODULE and/or ACTION using dbms_application_info().
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
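A minimal sketch of that approach (the module and action strings are illustrative):

BEGIN
  DBMS_APPLICATION_INFO.set_module(module_name => 'ArchiveData', action_name => 'starting');
  FOR i IN 1 .. 10000 LOOP
    -- ... archive one row here ...
    DBMS_APPLICATION_INFO.set_action('row ' || i || ' of 10000');
  END LOOP;
  DBMS_APPLICATION_INFO.set_module(NULL, NULL);
END;
/
-- another session can then watch the progress:
-- SELECT module, action FROM v$session WHERE module = 'ArchiveData';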
If you have access to a system shell from your PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "' || p_msg || '" | nc ' || p_host || ' ' || p_port || ' -w 5'); END;
where p_msg is the log message, and p_host and p_port identify a host running a script that reads data from a socket on that port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.

Partially consuming a cursor in multiple pl/sql calls without defining it in package spec

I have a large source data set (a few million rows) that requires complex processing, resulting in a much larger amount of data, which should then be offloaded and stored as files. The storage requires dividing up the resulting data based on certain parameters, namely N source rows that meet certain criteria.
Since it's possible to compute the said parameters within PL/SQL, it was decided that the most efficient way would be to create a package, declare a package-level cursor for the source rows, and write a procedure that partially consumes the opened cursor until the criteria are met, filling temporary tables with the resulting data. Those tables are then offloaded, and the procedure is called again, repeating until there are no more source rows. The PL/SQL basically looks like this:
create or replace PACKAGE BODY generator as

  cursor glob_cur_body(cNo number) is
    select *
    from source_table
    where no = cNo
    order by conditions;

  procedure close_cur is
  begin
    if glob_cur_body%isopen then
      close glob_cur_body;
    end if;
  end close_cur;

  procedure open_cur(pNo number) is
  begin
    close_cur;
    open glob_cur_body(pNo);
  end open_cur;

  function consume_cur return varchar2 is
    v       source_table%rowtype;
    part_id varchar2(100);
  begin
    fetch glob_cur_body into v;
    if glob_cur_body%notfound then
      return null;
    end if;
    -- Clear temporary tables
    -- Do the processing until the criteria are met or there are no more rows
    -- Fill the temporary tables and part_id
    return part_id;
  end consume_cur;

end generator;
And the consumer is doing the following (in pseudocode)
generator.open_cur;
part_id = generator.consume;
while ( part_id != null )
{
//offload data from temp tables
part_id = generator.consume;
}
generator.close_cur;
It's working fine, but unfortunately there's one problem: a package-level cursor makes the package stateful, meaning that recompiling it results in ORA-04068 for sessions that have already accessed it. This makes maintenance cumbersome, because there's a lot more to the package besides these functions, and it's actively used for unrelated purposes.
So I want to get rid of the package-level cursor, but I'm not sure that's possible. Some ideas I've already discarded:
Re-opening the cursor and skipping N rows: terrible performance, and unreliable because it's affected by any changes made to the data between opens.
Fetching the source cursor into a PL/SQL table: too large.
Filling up the entire unload tables at once and splitting them later: too large, subpar performance.
Opening the cursor as a refcursor and storing the refcursor variable in a dedicated package: impossible, as PL/SQL doesn't allow SYS_REFCURSOR variables at spec level.
Having open_cur return a refcursor, storing it in the offloader, and then somehow passing it to consume_cur: looked viable, but the offloader is in Java, and JDBC doesn't allow binding SYS_REFCURSOR parameters.
Changing consume_cur into a pipelined function: could have worked, but Oracle buffers pipelined rows, meaning it would execute multiple times when data is fetched from it row by row. Also counterintuitive.
The only other idea I've had so far is to make a dedicated package that stores said cursor, with open and close procedures and a get_cursor function returning a refcursor; generator.consume_cur would then call get_cursor. That would make the dedicated package (which is unlikely to change) stateful and the main package stateless. However, it seems like a half-baked patch rather than a proper solution. Is there a more decent way of achieving what I need? Perhaps changing the logic completely, without affecting performance and storage limits too much.
I have a problem understanding your question, but I can provide clarification for one of your ideas:
"Opening the cursor as a refcursor and storing the refcursor variable in a dedicated package: impossible, as PL/SQL doesn't allow SYS_REFCURSOR variables at spec level."
The workaround is DBMS_SQL.
create table test_rows as (select level rr from dual connect by level <= 100);

create or replace package cursor_ctx is
  ctx_number integer;
end;
/

declare
  p_cursor sys_refcursor;
begin
  open p_cursor for 'select rr from test_rows';
  cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
end;
/
This part consumes data from the cursor.
declare
  p_cursor sys_refcursor;
  type l_number is table of number;
  v_numbers l_number;
begin
  if DBMS_SQL.IS_OPEN(cursor_ctx.ctx_number) then
    p_cursor := DBMS_SQL.TO_REFCURSOR(cursor_ctx.ctx_number);
    fetch p_cursor bulk collect into v_numbers limit 10;
    if v_numbers.count < 10 then
      dbms_output.put_line('No more data, close cursor');
      close p_cursor;
      cursor_ctx.ctx_number := null;
    else
      -- convert back to a cursor number so the state survives for the next call
      cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
    end if;
    for i in nvl(v_numbers.first, 1) .. nvl(v_numbers.last, -1) loop
      dbms_output.put_line(v_numbers(i));
    end loop;
  else
    dbms_output.put_line('Null or cursor closed');
  end if;
end;
/
Pipelined functions also offer a way to split an input cursor into chunks; see Parallel Enabled Pipelined Table Functions.
And JDBC does allow using SYS_REFCURSOR as an output parameter; on the Java side a SYS_REFCURSOR maps to a ResultSet.

How to pass a table name as an argument to a stored procedure in Oracle APEX

General question: how do I pass a table name as a parameter to a procedure in Oracle APEX? For example, suppose I want to run the following SQL statement:
SELECT SOME_VALUE FROM A_TABLE
What is the simplest, most dumbed-down code to do this? I'm a SQL beginner and have basically only glanced at PL/SQL.
In more detail, I have the following procedure from the Advanced Tutorials that lets me download a BLOB from a custom table, but the bit of code that specifies the table to download from is hard-coded. I'd rather not have to create a new procedure for each table I want to store BLOBs in.
CREATE OR REPLACE PROCEDURE download_my_file(p_file IN NUMBER) AS
  v_mime      VARCHAR2(48);
  v_length    NUMBER;
  v_file_name VARCHAR2(2000);
  lob_loc     BLOB;
BEGIN
  SELECT mime_type, blob_content, name, DBMS_LOB.GETLENGTH(blob_content)
  INTO   v_mime, lob_loc, v_file_name, v_length
  FROM   oehr_file_subject
  WHERE  id = p_file;
  --
  -- set up the HTTP header
  --
  -- use an NVL around the mime type; if it is null, set it to
  -- application/octet, which may launch a download window in Windows
  owa_util.mime_header(NVL(v_mime, 'application/octet'), FALSE);
  -- set the size so the browser knows how much to download
  htp.p('Content-length: ' || v_length);
  -- the filename will be used by the browser if the user does a save-as
  htp.p('Content-Disposition: attachment; filename="' ||
        REPLACE(REPLACE(SUBSTR(v_file_name, INSTR(v_file_name, '/') + 1), CHR(10), NULL), CHR(13), NULL) || '"');
  -- close the headers
  owa_util.http_header_close;
  -- download the BLOB
  wpg_docload.download_file(lob_loc);
END download_my_file;
/
Thanks!
Use dynamic SQL with EXECUTE IMMEDIATE; it accepts a query string built at run time, and an INTO clause receives the result. So, in your dumbed-down example it would be:
execute immediate 'select acolumn from ' || p_tablename into l_column;
Detailed info and examples here :
http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/executeimmediate_statement.htm
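Applied to the download procedure above, a hedged sketch might look like this (download_file_from and p_table are illustrative names; DBMS_ASSERT.SIMPLE_SQL_NAME guards against SQL injection through the table-name parameter):

CREATE OR REPLACE PROCEDURE download_file_from(
  p_table IN VARCHAR2,
  p_file  IN NUMBER
) AS
  v_mime      VARCHAR2(48);
  v_length    NUMBER;
  v_file_name VARCHAR2(2000);
  lob_loc     BLOB;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT mime_type, blob_content, name, DBMS_LOB.GETLENGTH(blob_content)
       FROM ' || DBMS_ASSERT.simple_sql_name(p_table) || '
      WHERE id = :id'
    INTO v_mime, lob_loc, v_file_name, v_length
    USING p_file;
  -- emit the HTTP header and stream the BLOB exactly as in the
  -- original download_my_file procedure above
  owa_util.mime_header(NVL(v_mime, 'application/octet'), FALSE);
  htp.p('Content-length: ' || v_length);
  owa_util.http_header_close;
  wpg_docload.download_file(lob_loc);
END download_file_from;
/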
