dbms_output.put_line - performance

Does dbms_output.put_line decrease the performance in plsql code?

Every extra line of code decreases performance; after all, it is an extra instruction to be executed, which consumes at least some CPU. So yes, dbms_output.put_line decreases performance.
The real question is: does the benefit of this extra line of code outweigh the performance penalty? Only you can answer that question.
Regards,
Rob.

Yes, it's another piece of code that needs to be executed, but unless the output is actually turned on, I think the overhead is quite minimal.
Here's an AskTom question with more details: Is there a performance impact for dbms_output.put_line statements left in packages?

You can look into conditional compilation so that the DBMS_OUTPUT.PUT_LINE calls are only present in the compiled code if the procedure is compiled with the appropriate option.
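For example, here is a minimal sketch of conditional compilation (the flag name debug_on and the procedure name are assumptions; pick your own):
alter session set plsql_ccflags = 'debug_on:true';

create or replace procedure my_proc as
begin
  -- this call only exists in the compiled code when debug_on is true
  $if $$debug_on $then
    dbms_output.put_line('entering my_proc');
  $end
  null;  -- real work here
end;
Recompiling with plsql_ccflags = 'debug_on:false' strips the call entirely, so production code pays no cost at all.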
One question is whether DBMS_OUTPUT.ENABLE has been called.
If so, any value in a DBMS_OUTPUT.PUT_LINE will be recorded in the session's memory structure. If you continue pushing stuff in there and never taking it out (which might be the case with some application server connections) you might find that after a few days you have a LOT of stuff in memory.
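For long-lived sessions you can stop that growth by disabling the buffer, or by draining it periodically. A minimal sketch (how often to drain is up to you):
begin
  -- stop buffering entirely for this session
  dbms_output.disable;
end;

declare
  l_lines dbms_output.chararr;
  l_count integer := 1000;  -- drain up to 1000 buffered lines per call
begin
  -- alternatively, drain the buffer periodically and discard (or log) the lines
  dbms_output.get_lines(l_lines, l_count);
end;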

I use a log table instead of dbms_output. Make sure to set it up as an autonomous transaction, something like this (modify for your needs, of course):
create or replace package body somePackage as
...
  procedure ins_log(
    i_msg         in varchar2,
    i_msg_type    in varchar2,
    i_msg_code    in number default 0,
    i_msg_context in varchar2 default null
  ) is
    pragma autonomous_transaction;
  begin
    insert into myLogTable (
      created_date,
      msg,
      msg_type,
      msg_code,
      msg_context
    ) values (
      sysdate,
      i_msg,
      i_msg_type,
      i_msg_code,
      i_msg_context
    );
    commit;
  end ins_log;
...
end;
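Make sure you create the log table first, something like this (column sizes are assumptions; adjust to your data):
create table myLogTable (
  created_date date,
  msg          varchar2(4000),
  msg_type     varchar2(30),
  msg_code     number,
  msg_context  varchar2(200)
);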
In your code, if you're doing many operations in a loop, you may want to log only once per N operations, something like:
create or replace procedure myProcedure as
  cursor some_cursor is
    select * from someTable;
  v_ctr pls_integer := 0;
begin
  for rec in some_cursor
  loop
    v_ctr := v_ctr + 1;
    -- do something interesting
    if mod(v_ctr, 1000) = 0 then
      somePackage.ins_log('Inserted ' || v_ctr || ' records',
                          'Log',
                          i_msg_context => 'myProcedure');
    end if;
  end loop;
  commit;
exception
  when others then
    somePackage.ins_log(SQLERRM, 'Err', i_msg_context => 'myProcedure');
    rollback;
    raise;
end;
Note that the autonomous transaction ensures that your log statement gets inserted even if an error occurs and you roll back everything else (since it's a separate transaction).
Hope this helps...much better than dbms_output ;)

It depends on the ratio of how many times you call dbms_output.put_line versus what else you do in PL/SQL.

Using DBMS_OUTPUT might also be the cause of the following error:
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
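If you suspect a session's DBMS_OUTPUT buffer (or anything else) is eating PGA, a query along these lines shows which sessions hold the memory (it assumes you have select privileges on the v$ views):
select s.sid, s.username, p.pga_used_mem, p.pga_alloc_mem
from v$session s
join v$process p on p.addr = s.paddr
order by p.pga_alloc_mem desc;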

Related

oracle clear session in LOOP

I have a very strange situation and I don't know how to solve it. There is a package that isn't mine, let's call it XXPACK, and I must call this package's procedure in a loop over a table (assume the name is xxtable).
The situation is this:
if I run the following script twice (rownum<=150 run twice means 300 records in total)
begin
for rec in (select * from xxtable where rownum<=150)
LOOP
XXPACK.XXPROC(rec.somecolumn);
END LOOP;
end;
It works successfully, but when I run it once with rownum<=300 (the same 300 records)
begin
for rec in (select * from xxtable where rownum<=300)
LOOP
XXPACK.XXPROC(rec.somecolumn);
END LOOP;
end;
it gives me an error about not being able to update some table. I think it locks some tables or something like that. I need to call the procedure over the whole table. I tried dbms_lock.sleep(), dbms_session.reset_package(), and pragma autonomous_transaction; none of them helped. I think that if I fully clear the session state inside the loop it will not cause the error. How can I solve this situation?

Partially consuming a cursor in multiple pl/sql calls without defining it in package spec

I have a large source data set (a few million rows) that requires complex processing, resulting in a much larger amount of data, which should then be offloaded and stored as files. The storage requires dividing up the resulting data based on certain parameters, namely N source rows that meet certain criteria.
Since it's possible to compute the said parameters within PL/SQL, it was decided that the most efficient way would be to create a package, declare a spec-level cursor for the source rows in it, then write a procedure that partially consumes the opened cursor until the criteria is met and fills temporary tables with the resulting data, which would then be offloaded; the procedure would be called again, repeating until there are no more source rows. The PL/SQL basically looks like this:
create or replace package body generator as
  cursor glob_cur_body(cNo number) is
    select *
    from source_table
    where no = cNo
    order by conditions;

  procedure close_cur is
  begin
    if glob_cur_body%isopen then
      close glob_cur_body;
    end if;
  end close_cur;

  procedure open_cur(pNo number) is
  begin
    close_cur;
    open glob_cur_body(pNo);
  end open_cur;

  function consume_cur return varchar2 is
    v source_table%rowtype;
    part_id varchar2(100);
  begin
    fetch glob_cur_body into v;
    if glob_cur_body%notfound then
      return null;
    end if;
    -- Clear temporary tables
    -- Do the processing until the criteria is met or there are no more rows
    -- Fill the temporary tables and part_id
    return part_id;
  end consume_cur;
end generator;
And the consumer is doing the following (in pseudocode)
generator.open_cur;
part_id = generator.consume;
while ( part_id != null )
{
//offload data from temp tables
part_id = generator.consume;
}
generator.close_cur;
It's working fine, but unfortunately there's one problem: a spec-level cursor makes the package stateful, meaning that its recompilation results in ORA-04068 for sessions that already accessed it before. It makes maintenance cumbersome, because there's a lot more to the package besides said functions, and it's actively used for unrelated purposes.
So, I want to get rid of the spec-level cursor, but I'm not sure if that's possible. Some ideas I've already discarded:
Re-opening the cursor and skipping N rows: terrible performance, unreliable because affected by any changes to data made between opens
Fetching the source cursor into plsql table: size too large.
Filling up the entire unload tables at once, splitting them later: size too large, subpar performance.
Opening the cursor as refcursor and storing refcursor variable in a dedicated package: impossible, as pl/sql doesn't allow sys_refcursor variables at spec levels
Having open_cur procedures return refcursor, storing it in the offloader, and then somehow passing it to consume_cur: looked viable, but the offloader is in Java, and JDBC doesn't allow binding of SYS_REFCURSOR parameters.
Changing consume_cur to pipelined function: could have worked, but oracle buffers pipelined rows, meaning it would execute multiple times when fetching data from it row-by-row. Also counterintuitive.
The only other idea I've had so far is to make a dedicated package storing said cursor, with open and close procedures and a get_cursor function returning a refcursor; then call get_cursor from generator.consume_cur (a rough sketch follows). That would make the dedicated package (which is unlikely to change) stateful and the main package stateless. However, it seems like a half-baked patch rather than a proper solution. Is there a more decent way of achieving what I need? Perhaps changing the logic completely, without affecting performance and storage limits too much.
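For illustration, a minimal sketch of that dedicated-package idea (all names here are hypothetical):
create or replace package cursor_holder as
  procedure open_cur(pNo number);
  procedure close_cur;
  function get_cur return sys_refcursor;
end cursor_holder;

create or replace package body cursor_holder as
  g_cur sys_refcursor;  -- body-level state; only this tiny package becomes stateful

  procedure open_cur(pNo number) is
  begin
    close_cur;
    open g_cur for
      select * from source_table where no = pNo order by conditions;
  end open_cur;

  procedure close_cur is
  begin
    if g_cur%isopen then
      close g_cur;
    end if;
  end close_cur;

  function get_cur return sys_refcursor is
  begin
    return g_cur;  -- generator.consume_cur would fetch from this
  end get_cur;
end cursor_holder;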
I have trouble understanding your question, but I can provide clarification on your ideas.
Opening the cursor as refcursor and storing refcursor variable in a
dedicated package: impossible, as pl/sql doesn't allow sys_refcursor
variables at spec levels
A workaround is to use dbms_sql:
create table test_rows as (select level rr from dual connect by level <= 100);

create or replace package cursor_ctx is
  ctx_number integer;
end;

declare
  p_cursor sys_refcursor;
begin
  open p_cursor for 'select rr from test_rows';
  cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
end;
This part consumes data from the cursor:
declare
  p_cursor sys_refcursor;
  type l_number is table of number;
  v_numbers l_number;
begin
  if DBMS_SQL.IS_OPEN(cursor_ctx.ctx_number) then
    p_cursor := DBMS_SQL.TO_REFCURSOR(cursor_ctx.ctx_number);
    fetch p_cursor bulk collect into v_numbers limit 10;
    if v_numbers.count < 10 then
      dbms_output.put_line('No more data, close cursor');
      close p_cursor;
      cursor_ctx.ctx_number := null;
    else
      cursor_ctx.ctx_number := DBMS_SQL.TO_CURSOR_NUMBER(p_cursor);
    end if;
    for i in nvl(v_numbers.first, 1) .. nvl(v_numbers.last, -1) loop
      dbms_output.put_line(v_numbers(i));
    end loop;
  else
    dbms_output.put_line('Null or cursor closed');
  end if;
end;
Pipelined table functions also have a feature to split an input cursor into chunks: see Parallel Enabled Pipelined Table Functions.
And JDBC does allow using sys_refcursor as an output parameter: a sys_refcursor maps to a java.sql.ResultSet.

Rollback procedure creating file

I have several PL/SQL procedures that export tables in a file using UTL_FILE.
Here's a snippet:
PROCEDURE export_t1
AS
  l_file UTL_FILE.FILE_TYPE;
  l_record VARCHAR2(4096);
BEGIN
  l_file := UTL_FILE.FOPEN(DIRECTORY_PATH, FILENAME, 'A');
  FOR j IN
    (SELECT * FROM PRODUCTS WHERE HANDLE = '0')
  LOOP
    l_record := j.id || ',' || j.code || ',' || j.desc ....... [other fields];
    UTL_FILE.PUT_LINE(l_file, l_record);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
  UPDATE PRODUCTS SET HANDLE = '1' WHERE HANDLE = '0';
EXCEPTION
  WHEN OTHERS THEN
    -- log
    RAISE;
END export_t1;
So I have export_t1, export_t2, ..., export_tn procedures, and I call these sequentially in a 'main' procedure.
My question is: if I have an exception in export_t2, which is the second procedure, how can I prevent the first one (export_t1) from creating its file?
The idea is to create the files only when ALL of those procedures have completed OK, with no exceptions.
Unless you could get your file system to participate in a two-phase commit (which to my knowledge isn't possible right now), coordinating file output with your database transactions is going to be difficult because your file operations lie outside the scope of your database transaction.
I.e., there is always a theoretical scenario where something happens at exactly the wrong time and your database and file system are out of sync. (Sort of makes you appreciate everything COMMIT does for us).
Anyway, a possible strategy is to design things so the window for something going wrong is as short as possible. E.g.,
begin
  delete_real_files; -- delete leftovers
  write_temp_file_n1;
  write_temp_file_n2;
  write_temp_file_n3;
  ...
  write_temp_file_nx;
  rename_temp_files_to_real;
  commit;
  -- don't do anything else with the files after this point
exception
  when others then
    remove_real_files;
    remove_temp_files;
    rollback;
end;
The idea here is that you write all the files to temp files. If there is a failure, you clean them up. No process could ever see the "real" files, because you never created them. Only at the end do you make the temporary files real, by renaming them.
Your risk here is that your first few temp files get renamed successfully, but the subsequent temp files cannot get renamed AND either (A) a process jumps in and sees them before your exception handler can remove them or (B) the exception handler cannot remove them for some reason.
I like this approach because it ties all the risk to renaming files, which is a pretty safe operation (since it does not require extra disk space). It's not very likely that some of the renames will succeed and some will fail.
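For instance, rename_temp_files_to_real could be a thin wrapper around UTL_FILE.FRENAME; a sketch, where the directory object EXPORT_DIR and the file names are assumptions:
procedure rename_temp_files_to_real is
begin
  -- overwrite => true replaces a leftover real file if one exists
  utl_file.frename(src_location  => 'EXPORT_DIR',
                   src_filename  => 'products.csv.tmp',
                   dest_location => 'EXPORT_DIR',
                   dest_filename => 'products.csv',
                   overwrite     => true);
  -- ... one frename per exported file
end rename_temp_files_to_real;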
Lots of variations on this approach are possible. But the thing to remember is that you're not implementing a rock-solid solution here. There's always a chance that something goes wrong, so implement whatever checks (elsewhere in your system) are required, depending on how much fault tolerance you have.
UTL_FILE.FCLOSE (and UTL_FILE.FFLUSH) are what actually write buffered data to disk. If you don't want data on disk yet, you must not write it to disk: don't close or flush a file handle until after all the data has been written to each handle's buffer.
Depending on how big n is, you could end up with a lot of open file handles and a lot of buffered data. This won't be pretty.
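A sketch of that keep-it-buffered approach (write_t1/write_t2 are hypothetical writers that only call PUT_LINE, never flush or close):
declare
  l_f1 utl_file.file_type;
  l_f2 utl_file.file_type;
begin
  l_f1 := utl_file.fopen('EXPORT_DIR', 't1.csv', 'w', 32767);
  l_f2 := utl_file.fopen('EXPORT_DIR', 't2.csv', 'w', 32767);
  write_t1(l_f1);
  write_t2(l_f2);
  -- closing is what flushes the buffered data to disk
  utl_file.fclose(l_f1);
  utl_file.fclose(l_f2);
exception
  when others then
    utl_file.fclose_all;  -- releases the handles; partially written files may remain
    raise;
end;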
It would be better to create another procedure to call UTL_FILE.FREMOVE, which removes a named file (assuming sufficient privileges).
I would do this with the Oracle Scheduler. With each procedure as a separate step in a chain, you can define a rule using the scheduler chain condition syntax to call the file-removal procedure on an error in the chain.
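A rough sketch of such a chain (the program names are assumptions and would each point at a pre-created stored-procedure program; the conditions use the scheduler's chain condition syntax):
begin
  dbms_scheduler.create_chain(chain_name => 'export_chain');
  dbms_scheduler.define_chain_step('export_chain', 'do_exports',   'export_all_prog');
  dbms_scheduler.define_chain_step('export_chain', 'remove_files', 'remove_files_prog');
  dbms_scheduler.define_chain_rule('export_chain', 'TRUE', 'START do_exports');
  dbms_scheduler.define_chain_rule('export_chain', 'do_exports SUCCEEDED', 'END');
  -- on any error, run the cleanup step that calls UTL_FILE.FREMOVE
  dbms_scheduler.define_chain_rule('export_chain', 'do_exports ERROR', 'START remove_files');
  dbms_scheduler.define_chain_rule('export_chain', 'remove_files COMPLETED', 'END');
  dbms_scheduler.enable('export_chain');
end;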
Michael,
You can use utl_file.fremove(DIRECTORY_PATH, FILENAME) in the exception handler in order to delete the file.
Example code is given below.
Procedure 1 :
CREATE OR REPLACE PROCEDURE SHAREFLE IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR','HELLO.TXT','a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Second Time! ' ||
    TO_CHAR(SYSDATE,'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
  SHAREFLE1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
END;
Procedure 2:
CREATE OR REPLACE PROCEDURE SHAREFLE1 IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR','HELLO.TXT','a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Third Time! ' ||
    TO_CHAR(SYSDATE,'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
    utl_file.fremove('TEST_DIR','HELLO.TXT');
END;
PL/SQL block to call the first procedure:
set serveroutput on;
begin
sharefle;
end;
This is a pretty simple example of what you asked. If there is any exception, you can see in procedure 2 that the exception block removes the file 'HELLO.TXT' (both procedure 1 and procedure 2 use the same file). I have personally checked that this works. Try raising your own exception and check for yourself. In case of any doubt, please comment.
Note: this is NEVER THE BEST WAY TO DO IT. I have only shown that it CAN be done this way. Thanks :)

Oracle - Update statement in Stored Procedure does not work once in a while

We have a stored procedure that inserts data into a table from some temporary tables in an Oracle database. There is an update statement after the insert that updates a flag in the same table based on certain checks. At the end of the stored procedure a commit happens.
The problem is that the update works on 95% cases but in some cases it fails to update. When we try to run it again without changing anything, it works. Even trying to execute the same stored procedure on the same data at some other time, works perfectly. I haven't found any issues in the logic in the stored procedure. I feel there is some database level issue which we are not able to find (maybe related to concurrency). Any ideas on this would be very helpful.
Without seeing the source code we will just be guessing. The most obvious suggestion that I can think of is that it hits an exception somewhere along the way in some cases and never gets as far as the commit. Another possibility is that there is a lock on the table during execution when it fails.
Probably the best thing to investigate further would be to add an exception handler that writes the exceptions to some table or file and see what error is raised e.g.
-- create a logging table
create table tmp_error_log (timestamp timestamp(0), error_text varchar2(1000));

-- add a variable to your procedure declaration
v_sql varchar2(1000);

-- add an exception handler just before the final end; statement of your procedure
exception
  when others then
    begin
      -- bind variables avoid quoting problems when SQLERRM contains apostrophes
      v_sql := 'insert into tmp_error_log values (:ts, :msg)';
      dbms_output.put_line(v_sql);
      execute immediate v_sql using sysdate, sqlerrm;
      commit;
    end;

-- see what you get in the table
select * from tmp_error_log;

Oracle Forms - Commit Single SQL Statement Instead of Entire Form

I'm working on an Oracle Form (10g) that has two blocks on a single canvas. The top block is called QUERY_BLOCK which the user fills out to fill PRICING_BLOCK with rows of data.
However, in QUERY_BLOCK I also have a checkbox which needs to perform an INSERT and DELETE on the database, respectively. My WHEN-CHECKBOX-CHANGED trigger looks like this:
begin
  if :query_block.profile_code is not null then
    if :query_block.CHECKBOX_FLAG = 'Y' then
      INSERT INTO profile_table VALUES ('Y', :query_block.profile_code);
    else
      DELETE FROM profile_table WHERE profile_code = :query_block.profile_code and profile_type_code = 'FR';
    end if;
  end if;
end;
I know that I need to add some sort of commit statement in here, otherwise the record locks and nothing actually happens. However, if I do a COMMIT; then the entire form goes through validation and updates any changed rows.
How do I execute these one-line queries I have without the rest of my form updating as well?
Without commenting on the actual wisdom of this, you could create a function in the database that performs an autonomous transaction:
CREATE OR REPLACE FUNCTION my_fnc(p_flag IN VARCHAR2)
  RETURN VARCHAR2 IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  IF p_flag = 'Y' THEN
    INSERT...
  ELSE
    DELETE...
  END IF;
  COMMIT;
  RETURN 'SUCCESS';
EXCEPTION
  WHEN OTHERS THEN
    RETURN 'FAIL';
END;
Your Forms code could then look like:
begin
  if :query_block.profile_code is not null then
    stat := my_fnc(:query_block.CHECKBOX_FLAG);
  end if;
end;
This allows your function to commit independent of the calling transaction. Beware of this, however - if your outer transaction must roll back, the autonomous transaction will still be committed. I would think there should be a transactional way to do what you need done to solve your locking problem, which would likely be the superior approach. Without knowing the specifics of your process, I can't tell. Generally speaking, autonomous transactions are used when an update must occur regardless of whether the transaction commits or rolls back, e.g., logging.
