Write contents into UTL_FILE immediately after a single iteration - Oracle

I have a PL/SQL block that queries a table function; I use a cursor to process the result record by record, apply some business logic, and finally write the qualified records to a file.
Up to 1 million records may need to be processed, and throughput is roughly 10,000 records per minute (measured against a few sample chunks of data).
I now need to surface the processing status in a different environment, a JSP page.
DECLARE
  vSFile     utl_file.file_type;
  logfile    utl_file.file_type;
  vNewLine   VARCHAR2(200);
  CURSOR my_cursor IS SELECT * FROM mytable;
  my_details my_cursor%ROWTYPE;
BEGIN
  vSFile  := utl_file.fopen('ORALOAD', file_name, 'r');
  logfile := utl_file.fopen('ORALOAD', file_name || '.log', 'w');
  IF utl_file.is_open(vSFile) THEN
    utl_file.get_line(vSFile, vNewLine);
    OPEN my_cursor;
    LOOP
      FETCH my_cursor INTO my_details;
      EXIT WHEN my_cursor%NOTFOUND;
      -- Do processing
      utl_file.putf(logfile, '%s ', my_details.id);  -- an info line that the record completed (ID column assumed; write whichever fields you need)
    END LOOP;
    utl_file.fclose(logfile);
    CLOSE my_cursor;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    -- Error handling
    NULL;
END;
/
The log information written here is not available until the process completes, so I am unable to track how far it has progressed.
Can someone please assist me with this?

To make the data appear in the file as it is written, use the FFLUSH procedure. For instance:
OPEN my_cursor;
LOOP
  FETCH my_cursor INTO my_details;
  EXIT WHEN my_cursor%NOTFOUND;
  -- Do processing
  utl_file.putf(logfile, '%s ', my_details.id);
  -- Call the FFLUSH proc here, and the contents are available immediately.
  utl_file.FFLUSH(logfile);
END LOOP;
From the documentation:
FFLUSH physically writes pending data to the file identified by the
file handle. Normally, data being written to a file is buffered. The
FFLUSH procedure forces the buffered data to be written to the file.
The data must be terminated with a newline character.
Flushing is useful when the file must be read while still open. For
example, debugging messages can be flushed to the file so that they
can be read immediately.
There is more information on UTL_FILE in the Oracle docs.
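If flushing after every single record proves too costly at this volume (roughly 10,000 records per minute), a common compromise is to flush every N rows. Here is a minimal self-contained sketch of that pattern; the file name, flush interval, and table are placeholders, not from the original code:
DECLARE
  CURSOR my_cursor IS SELECT * FROM mytable;
  my_details    my_cursor%ROWTYPE;
  logfile       utl_file.file_type;
  c_flush_every CONSTANT PLS_INTEGER := 1000;  -- tune to taste
  v_done        PLS_INTEGER := 0;
BEGIN
  logfile := utl_file.fopen('ORALOAD', 'progress.log', 'w');  -- placeholder file name
  OPEN my_cursor;
  LOOP
    FETCH my_cursor INTO my_details;
    EXIT WHEN my_cursor%NOTFOUND;
    -- Do processing
    v_done := v_done + 1;
    IF MOD(v_done, c_flush_every) = 0 THEN
      utl_file.put_line(logfile, v_done || ' rows processed');
      utl_file.fflush(logfile);  -- the progress line becomes visible to readers now
    END IF;
  END LOOP;
  utl_file.put_line(logfile, 'done: ' || v_done || ' rows');
  utl_file.fclose(logfile);  -- fclose flushes any remaining buffered data
  CLOSE my_cursor;
END;
/
The JSP side (or anything else) can then simply poll the file for the latest count.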


Not getting expected output in cursor by iterations [duplicate]

I have an SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these print statements appears in the log files, but only once the script has completed.
Is there any way to ensure that the output appears in the log file as the script is running?
Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it is just putting that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE, and displays it.
So the only way you can make the output appear in the log file more frequently is to break up a large PL/SQL block into multiple smaller blocks, so control is returned to the client more often. This may not be practical depending on what your code is doing.
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.
If it is possible for you, you should replace the calls to dbms_output.put_line with your own procedure.
Here is the code for this procedure, WRITE_LOG, which lets you choose between 2 logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
--   CREATE TABLE dbg (u varchar2(200)   --- username
--                   , d timestamp       --- date
--                   , l varchar2(4000)  --- log
--                   );
AS
  pragma autonomous_transaction;
BEGIN
  insert into dbg(u, d, l) values (user, sysdate, p_log);
  commit;
END to_dbg_table;
/
or write directly to the DB server that hosts your database
This uses the Oracle directory TMP_DIR
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--   CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
  l_file utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
  utl_file.put_line(l_file, p_log);
  utl_file.fflush(l_file);
  utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then the WRITE_LOG procedure, which can switch between the 2 modes, or be deactivated to avoid performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
  -- g_DEBUG can be set as a package variable defaulted to FALSE
  -- then change it when debugging is required
  g_DEBUG boolean := true;
  -- the log file name can be set with several methods...
  g_logfname varchar2(32767) := 'my_output.log';
  -- choose between 2 logging solutions:
  -- file mode:
  g_TYPE varchar2(7) := 'file';
  -- table mode:
  --g_TYPE varchar2(7) := 'table';
  -----------------------------------------------------------------
BEGIN
  if g_DEBUG then
    if g_TYPE = 'file' then
      to_dbg_file(g_logfname, p_log);
    elsif g_TYPE = 'table' then
      to_dbg_table(p_log);
    end if;
  end if;
END write_log;
/
And here is how to test the above:
1) Launch this (file mode) from SQL*Plus:
BEGIN
  write_log('this is a test');
  for i in 1..100 loop
    DBMS_LOCK.sleep(1);
    write_log('iter=' || i);
  end loop;
  write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
Two alternatives:
You can insert your logging details into a logging table using an autonomous transaction, and query that logging table from another session (SQL*Plus, Toad, SQL Developer, etc.). The autonomous transaction is what makes it possible to commit your logging without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information. See here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html When you use a pipelined function you don't need another SQL*Plus/Toad/SQL Developer session.
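For illustration, a minimal sketch of the pipelined-function idea (all names here are hypothetical, not from the linked post). Rows become visible as the client fetches them, so use a small fetch size, e.g. set arraysize 1 in SQL*Plus:
CREATE OR REPLACE TYPE t_msg_tab AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION run_job_with_progress RETURN t_msg_tab PIPELINED IS
BEGIN
  FOR i IN 1 .. 10 LOOP
    -- ... do one unit of real work here ...
    PIPE ROW('step ' || i || ' finished at ' || TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS'));
  END LOOP;
  RETURN;
END;
/
-- then: SELECT * FROM TABLE(run_job_with_progress);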
The DBMS_OUTPUT buffer is read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, that means it only gets flushed once the procedure finishes.
You can apply the method described in this SO to write the DBMS_OUTPUT buffer to a file.
Set session metadata MODULE and/or ACTION using dbms_application_info().
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
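A minimal sketch of that approach (the loop bound stands in for your real driving query):
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'ArchiveData',
                                   action_name => 'starting');
  FOR i IN 1 .. 1000 LOOP  -- stands in for your real driving loop
    -- ... process one record ...
    DBMS_APPLICATION_INFO.SET_ACTION(i || ' of 1000');
  END LOOP;
  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);  -- clear when done
END;
/
-- Another session (or OEM) can watch:
-- SELECT module, action FROM v$session WHERE module = 'ArchiveData';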
If you have access to the system shell from the PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "'||p_msg||'" | nc '||p_host||' '||p_port||' -w 5'); END;
p_msg is the log message.
p_host is a host running a python script that reads data from a socket on port p_port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.

How to print sqlplus output immediately

I have a very simple PL/SQL block.
In this code I print the index of the loop, waiting 1 second before each print.
My problem is that I want this output to be usable as a live log: when the dbms_output.put_line procedure is invoked, I want to see the output immediately.
With the current code, only after it finishes (5 seconds) does it print all the output in one shot...
set serveroutput on
set echo on
begin
  for i in 1..5 loop
    dba_maint.pkg_utils.sp_sleep(1);
    dbms_output.put_line(i);
  end loop;
end;
/
No way, you can't: the output is displayed only once the PL/SQL block has finished.
If you want to create a live log:
- create a table
- create a sequence
- create an autonomous-transaction procedure which would
  - insert a row into that table
  - using the sequence (so that you'd know how to order rows)
  - possibly with a timestamp (so that you'd know how long a certain step took)
  - commit (which won't affect the main transaction as, remember, the procedure is an autonomous-transaction one)
Then put calls to the logging procedure into your long-running PL/SQL procedure, run it, and let it work. In another session, query the log table to view progress. A sketch of the setup follows.
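A minimal sketch of that setup (all names hypothetical):
CREATE TABLE progress_log (
  id  NUMBER,
  ts  TIMESTAMP,
  msg VARCHAR2(4000)
);
CREATE SEQUENCE progress_log_seq;

CREATE OR REPLACE PROCEDURE log_progress(p_msg VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO progress_log (id, ts, msg)
  VALUES (progress_log_seq.NEXTVAL, SYSTIMESTAMP, p_msg);
  COMMIT;  -- autonomous: does not touch the main transaction
END;
/
-- From another session, while the long job runs:
-- SELECT * FROM progress_log ORDER BY id;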

Partially consuming a cursor in multiple pl/sql calls without defining it in package spec

I have a large source data set (a few million rows) that requires complex processing, resulting in a much larger amount of data, which should then be offloaded and stored as files. The storage requires dividing the resulting data based on certain parameters, namely N source rows that meet certain criteria.
Since it's possible to compute the said parameters within PL/SQL, it was decided that the most efficient way would be to create a package, declare a spec-level cursor over the source rows, and write a procedure that partially consumes the opened cursor until the criteria are met and fills temporary tables with the resulting data. The data is then offloaded, the procedure is called again, and the cycle repeats until there are no more source rows. The PL/SQL basically looks like this:
create or replace PACKAGE BODY generator as

  cursor glob_cur_body(cNo number) is
    select *
      from source_table
     where no = cNo
     order by conditions;

  procedure close_cur is
  begin
    if glob_cur_body%isopen then
      close glob_cur_body;
    end if;
  end close_cur;

  procedure open_cur(pNo number) is
  begin
    close_cur;
    open glob_cur_body(pNo);
  end open_cur;

  function consume_cur return varchar2 is
    v       source_table%rowtype;
    part_id varchar2(100);
  begin
    fetch glob_cur_body into v;
    if glob_cur_body%notfound then
      return null;
    end if;
    -- Clear temporary tables
    -- Do the processing until the criteria are met or there are no more rows
    -- Fill the temporary tables and part_id
    return part_id;
  end consume_cur;

end generator;
And the consumer does the following (in pseudocode):
generator.open_cur;
part_id = generator.consume;
while ( part_id != null )
{
//offload data from temp tables
part_id = generator.consume;
}
generator.close_cur;
It's working fine, but unfortunately there's one problem: a spec-level cursor makes the package stateful, meaning that recompiling it results in ORA-04068 for sessions that have already accessed it. That makes maintenance cumbersome, because there's a lot more to the package besides these functions, and it's actively used for unrelated purposes.
So, I want to get rid of the spec-level cursor, but I'm not sure if that's possible. Some ideas I've already discarded:
Re-opening the cursor and skipping N rows: terrible performance, unreliable because affected by any changes to data made between opens
Fetching the source cursor into a PL/SQL collection: too large to hold in memory.
Filling up the entire unload tables at once, splitting them later: size too large, subpar performance.
Opening the cursor as a ref cursor and storing the ref cursor variable in a dedicated package: impossible, as PL/SQL doesn't allow SYS_REFCURSOR variables at spec level.
Having open_cur procedures return refcursor, storing it in the offloader, and then somehow passing it to consume_cur: looked viable, but the offloader is in Java, and JDBC doesn't allow binding of SYS_REFCURSOR parameters.
Changing consume_cur to a pipelined function: could have worked, but Oracle buffers pipelined rows, meaning it would execute multiple times when data is fetched from it row by row. Also counterintuitive.
The only other idea I've had so far is to make a dedicated package storing said cursor, with open and close procedures and a get_cursor function returning a ref cursor, then call get_cursor from generator.consume_cur. That would make the dedicated package (which is unlikely to change) stateful and the main package stateless. However, it seems like a half-baked patch rather than a solution. Is there a more decent way of achieving what I need? Perhaps changing the logic completely, without affecting performance and storage limits too much.
I have trouble understanding your question, but I can offer clarification for some of your ideas.
Opening the cursor as refcursor and storing refcursor variable in a
dedicated package: impossible, as pl/sql doesn't allow sys_refcursor
variables at spec levels
A workaround is to use dbms_sql: convert the ref cursor to a DBMS_SQL cursor number, which can be stored in a plain integer package variable.
create table test_rows as (select level rr from dual connect by level <= 100);

create or replace package cursor_ctx is
  ctx_number integer;
end;
/

declare
  p_cursor sys_refcursor;
begin
  open p_cursor for 'select rr from test_rows';
  cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
end;
/
This part consumes data from the cursor: the stored cursor number is converted back to a ref cursor, fetched from in bulk, and then converted back to a number so the next call can pick up where this one left off.
declare
  p_cursor sys_refcursor;
  type l_number is table of number;
  v_numbers l_number;
begin
  if DBMS_SQL.IS_OPEN(cursor_ctx.ctx_number) then
    p_cursor := DBMS_SQL.TO_REFCURSOR(cursor_ctx.ctx_number);
    fetch p_cursor bulk collect into v_numbers limit 10;
    if v_numbers.count < 10 then
      dbms_output.put_line('No more data, close cursor');
      close p_cursor;
      cursor_ctx.ctx_number := null;
    else
      cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
    end if;
    for i in nvl(v_numbers.first, 1) .. nvl(v_numbers.last, -1) loop
      dbms_output.put_line(v_numbers(i));
    end loop;
  else
    dbms_output.put_line('Null or cursor closed');
  end if;
end;
/
Pipelined functions also have a feature to split an input cursor into chunks; see Parallel Enabled Pipelined Table Functions.
JDBC does allow using sys_refcursor as an output parameter: a sys_refcursor maps to a ResultSet.

Rollback procedure creating file

I have several PL/SQL procedures that export tables in a file using UTL_FILE.
Here's a snippet:
PROCEDURE export_t1
AS
  l_file   UTL_FILE.FILE_TYPE;
  l_record VARCHAR2(4096);
BEGIN
  l_file := UTL_FILE.FOPEN(DIRECTORY_PATH, FILENAME, 'A');
  FOR j IN (SELECT * FROM PRODUCTS WHERE HANDLE = '0')
  LOOP
    l_record := j.id || ',' || j.code || ',' || j.desc ....... [other fields];
    UTL_FILE.PUT_LINE(l_file, l_record);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
  UPDATE PRODUCTS SET HANDLE = '1' WHERE HANDLE = '0';
EXCEPTION
  WHEN OTHERS THEN
    -- log
    RAISE;
END export_t1;
So I have export_t1, export_t2, ..., export_tn procedures, and I call them sequentially from a 'main' procedure.
My question is: if I have an exception in export_t2, the second procedure, how can I stop the first one (export_t1) from leaving its file behind?
The idea is to create the files only when ALL the procedures complete OK, with no exceptions.
Unless you could get your file system to participate in a two-phase commit (which to my knowledge isn't possible right now), coordinating file output with your database transactions is going to be difficult because your file operations lie outside the scope of your database transaction.
I.e., there is always a theoretical scenario where something happens at exactly the wrong time and your database and file system are out of sync. (Sort of makes you appreciate everything COMMIT does for us).
Anyway, a possible strategy is to design things so the window for something going wrong is as short as possible. E.g.,
begin
  delete_real_files; -- delete leftovers.
  write_temp_file_n1;
  write_temp_file_n2;
  write_temp_file_n3;
  ...
  write_temp_file_nx;
  rename_temp_files_to_real;
  commit;
  -- don't do anything else with the files after this point
exception
  when others then
    remove_real_files;
    remove_temp_files;
    rollback;
end;
The idea here is that you write all the files to temp files. If there is a failure, you clean them up. No process could ever see the "real" files, because you never created them. Only at the end do you make the temporary files real, by renaming them.
Your risk here is that your first few temp files get renamed successfully, but the subsequent temp files cannot get renamed AND either (A) a process jumps in and sees them before your exception handler can remove them or (B) the exception handler cannot remove them for some reason.
I like this approach because it ties all the risk to renaming files, which is a pretty safe operation (since it does not require extra disk space). It's not very likely that some of the renames will succeed and some will fail.
Lots of variations on this approach are possible. But the thing to remember is that you're not implementing a rock-solid solution here. There's always a chance that something goes wrong, so implement whatever checks (elsewhere in your system) are required, depending on how much fault tolerance you have.
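The rename step itself maps naturally onto UTL_FILE.FRENAME. A minimal sketch, assuming the temp and real files share one Oracle directory, and using made-up directory and file names:
CREATE OR REPLACE PROCEDURE rename_temp_files_to_real
AS
BEGIN
  -- Loop over your real file list in practice; two hard-coded examples here.
  UTL_FILE.FRENAME(src_location  => 'EXPORT_DIR', src_filename  => 'export_t1.tmp',
                   dest_location => 'EXPORT_DIR', dest_filename => 'export_t1.csv',
                   overwrite     => TRUE);
  UTL_FILE.FRENAME('EXPORT_DIR', 'export_t2.tmp', 'EXPORT_DIR', 'export_t2.csv', TRUE);
END rename_temp_files_to_real;
/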
UTL_FILE.FCLOSE (or UTL_FILE.FFLUSH) is what physically writes to disk. If you don't want to write to disk, you must not write to disk: don't close or flush a file handle until after all the data has been written to each individual buffer.
Depending on how big n is, you could end up with a lot of open file handles buffering a lot of data. This won't be pretty.
It would be better to create another procedure that calls UTL_FILE.FREMOVE, which removes a named file (assuming sufficient privileges).
I would do this in the Oracle Scheduler: with each procedure a separate step in a chain, you can define a rule, using the scheduler chain condition syntax, that calls the file-removal procedure when any step in the chain errors.
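A rough sketch of such a rule (chain and step names are hypothetical; step_remove_files would run a program that calls UTL_FILE.FREMOVE):
BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE(
    chain_name => 'export_chain',
    condition  => 'step_export_t1 FAILED OR step_export_t2 FAILED',
    action     => 'START step_remove_files');
END;
/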
Michael,
You can probably use utl_file.fremove(DIRECTORY_PATH, FILENAME) in the exception handler in order to delete the file.
Example code is given below.
Procedure 1 :
CREATE OR REPLACE PROCEDURE SHAREFLE IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR', 'HELLO.TXT', 'a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Second Time! ' ||
                    TO_CHAR(SYSDATE, 'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
  SHAREFLE1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
END;
/
Procedure 2:
CREATE OR REPLACE PROCEDURE SHAREFLE1 IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR', 'HELLO.TXT', 'a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Third Time! ' ||
                    TO_CHAR(SYSDATE, 'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
    utl_file.fremove('TEST_DIR', 'HELLO.TXT');
END;
/
PL/SQL block to call the first procedure:
set serveroutput on
begin
  sharefle;
end;
/
This code is a pretty simple example of what you asked. If there is any exception, you can see in procedure 2 that the exception block removes the file 'HELLO.TXT' (both procedure 1 and procedure 2 use the same file). I have personally checked this and it works. Try raising your own exception and check for yourself. In case of any doubt, please do comment.
Note: this is never the best way to do it; I have only shown that it can be done this way. Thanks :)

MyBatis and DBMS_OUTPUT

I am using Oracle/MyBatis and trying to debug a stored procedure with an enormous number of parameters. Inside the stored procedure I get an ORA-01438: value larger than specified precision allowed for this column.
So my initial approach would be to do like dbms_output.put_line in the stored procedure to try to see what the values are right before the offending statement. Without MyBatis, I would ordinarily open up a sqlplus script and type set serveroutput on and then run my stored procedure at some later point to see all the debug messages come out. With MyBatis, I cannot figure out how (if possible) to get these debug statements.
I have the ibatis and sql debuggers set for DEBUG and I use log4j to log everything for my Tomcat 6 application.
The DBMS_OUTPUT package has a few other procedures that you could use. DBMS_OUTPUT.ENABLE functions much like the SQL*Plus command set serveroutput on in that it allocates a buffer for DBMS_OUTPUT.PUT_LINE to write to. DBMS_OUTPUT.GET_LINE can be used to fetch the data written to that buffer by previous calls to DBMS_OUTPUT.PUT_LINE. So it should be possible to call the ENABLE function, call the procedure which writes a number of lines to the buffer, and then call GET_LINE (or GET_LINES) to fetch the data that was written to the DBMS_OUTPUT buffer and write that data to your logs.
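A minimal sketch of that sequence (the inner procedure name is hypothetical):
DECLARE
  v_lines DBMS_OUTPUT.CHARARR;
  v_count INTEGER := 1000;   -- max lines to fetch per call
BEGIN
  DBMS_OUTPUT.ENABLE(NULL);  -- NULL buffer size = unlimited
  my_stored_procedure;       -- hypothetical: the procedure doing the PUT_LINE calls
  DBMS_OUTPUT.GET_LINES(v_lines, v_count);
  FOR i IN 1 .. v_count LOOP
    NULL;  -- write v_lines(i) somewhere visible: a log table, UTL_FILE, etc.
  END LOOP;
END;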
It may be simpler, however, to redirect the logging to an Oracle database table rather than trying to use DBMS_OUTPUT. One common approach is to create your own package that has a switch to determine whether to write to DBMS_OUTPUT or whether to write to a table. Something like
CREATE OR REPLACE PACKAGE p
AS
  procedure l( p_str IN VARCHAR2 );
END;
/

CREATE OR REPLACE PACKAGE BODY p
AS
  g_destination          INTEGER;
  g_destination_table    CONSTANT INTEGER := 1;
  g_destination_dbms_out CONSTANT INTEGER := 2;

  PROCEDURE l( p_str IN VARCHAR2 )
  AS
  BEGIN
    IF( g_destination = g_destination_dbms_out )
    THEN
      dbms_output.put_line( p_str );
    ELSE
      INSERT INTO log_table ...
    END IF;
  END;
BEGIN
  g_destination := <<determine which constant to set it to. This
                     may involve querying a `SETTINGS` table, looking
                     at the environment, or something else>>
END;
/
