Rollback procedure creating file - oracle

I have several PL/SQL procedures that export tables to a file using UTL_FILE.
Here's a snippet:
PROCEDURE export_t1
AS
  l_file   UTL_FILE.FILE_TYPE;
  l_record VARCHAR2(4096);
BEGIN
  l_file := UTL_FILE.FOPEN(DIRECTORY_PATH, FILENAME, 'A');
  FOR j IN (SELECT * FROM PRODUCTS WHERE HANDLE = '0')
  LOOP
    l_record := j.id || ',' || j.code || ',' || j.desc ... [other fields];
    UTL_FILE.PUT_LINE(l_file, l_record);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
  UPDATE PRODUCTS SET HANDLE = '1' WHERE HANDLE = '0';
EXCEPTION
  WHEN OTHERS THEN
    -- log
    RAISE;
END export_t1;
So I have export_t1, export_t2, ..., export_tn procedures, and I call them sequentially from a 'main' procedure.
My question is: if an exception is raised in export_t2, the second procedure, how can I prevent the first one (export_t1) from creating its file?
The idea is to create the files only when ALL of the procedures complete with no exception.

Unless you could get your file system to participate in a two-phase commit (which to my knowledge isn't possible right now), coordinating file output with your database transactions is going to be difficult because your file operations lie outside the scope of your database transaction.
I.e., there is always a theoretical scenario where something happens at exactly the wrong time and your database and file system are out of sync. (Sort of makes you appreciate everything COMMIT does for us).
Anyway, a possible strategy is to design things so the window for something going wrong is as short as possible. E.g.,
begin
  delete_real_files;  -- delete leftovers
  write_temp_file_n1;
  write_temp_file_n2;
  write_temp_file_n3;
  ...
  write_temp_file_nx;
  rename_temp_files_to_real;
  commit;
  -- don't do anything else with the files after this point
exception
  when others then
    remove_real_files;
    remove_temp_files;
    rollback;
end;
The idea here is that you write all the files to temp files. If there is a failure, you clean them up. No process could ever see the "real" files, because you never created them. Only at the end do you make the temporary files real, by renaming them.
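As a minimal sketch of the rename step, rename_temp_files_to_real could be built on UTL_FILE.FRENAME. The directory object EXPORT_DIR and the file names below are placeholders for illustration, not anything from the original question:
PROCEDURE rename_temp_files_to_real
AS
BEGIN
  -- FRENAME renames (moves) a file within or between directory objects;
  -- overwrite => TRUE replaces any leftover "real" file of the same name
  UTL_FILE.FRENAME(src_location  => 'EXPORT_DIR',
                   src_filename  => 'products.csv.tmp',
                   dest_location => 'EXPORT_DIR',
                   dest_filename => 'products.csv',
                   overwrite     => TRUE);
  -- ...repeat for the files written by export_t2 .. export_tn
END rename_temp_files_to_real;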
Your risk here is that your first few temp files get renamed successfully, but the subsequent temp files cannot get renamed AND either (A) a process jumps in and sees them before your exception handler can remove them or (B) the exception handler cannot remove them for some reason.
I like this approach because it ties all the risk to renaming files, which is a pretty safe operation (since it does not require extra disk space). It's not very likely that some of the renames will succeed and some will fail.
Lots of variations on this approach are possible. But the thing to remember is that you're not implementing a rock-solid solution here. There's always a chance that something goes wrong, so implement whatever checks (elsewhere in your system) are required, depending on how much fault tolerance you need.

UTL_FILE.FCLOSE (or UTL_FILE.FFLUSH) literally writes to disk. If you don't want to write to disk, you must not write to disk - don't close or flush a file handle until after all the data has been written to each individual buffer.
Depending on how big n is, you could have a lot of open file handles with a lot of data buffered. This won't be pretty.
It would be better to create another procedure to call UTL_FILE.FREMOVE, which removes a named file (assuming sufficient privileges).
I would do this with the Oracle Scheduler: with each procedure as a separate step in a chain, you can define a rule using the scheduler chain condition syntax that calls the file-removal procedure when a step in the chain errors.
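A rough sketch of that cleanup side, assuming a directory object EXPORT_DIR, made-up file names, and a chain EXPORT_CHAIN whose steps STEP_T1, STEP_T2 and STEP_CLEANUP already exist (all of these names are invented for illustration):
-- Cleanup procedure: UTL_FILE.FREMOVE raises an exception if the file
-- does not exist, hence the inner handler.
CREATE OR REPLACE PROCEDURE remove_export_files AS
  PROCEDURE drop_file(p_name IN VARCHAR2) IS
  BEGIN
    UTL_FILE.FREMOVE('EXPORT_DIR', p_name);
  EXCEPTION
    WHEN OTHERS THEN NULL;  -- file was never created; nothing to remove
  END;
BEGIN
  drop_file('products.csv');
  drop_file('orders.csv');
  -- ...one call per export file
END remove_export_files;
/

-- Chain rule: if either export step fails, run the cleanup step.
BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE(
    chain_name => 'EXPORT_CHAIN',
    condition  => 'STEP_T1 FAILED OR STEP_T2 FAILED',
    action     => 'START STEP_CLEANUP',
    rule_name  => 'CLEANUP_ON_ERROR');
END;
/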

Michael,
You can probably use utl_file.fremove(DIRECTORY_PATH, FILENAME); in the exception handler in order to delete the file.
Example code is given below.
Procedure 1:
CREATE OR REPLACE PROCEDURE SHAREFLE IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR','HELLO.TXT','a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Second Time! ' ||
                    TO_CHAR(SYSDATE,'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
  SHAREFLE1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
    NULL;
END;
Procedure 2:
CREATE OR REPLACE PROCEDURE SHAREFLE1 IS
  v_MyFileHandle UTL_FILE.FILE_TYPE;
BEGIN
  v_MyFileHandle := UTL_FILE.FOPEN('TEST_DIR','HELLO.TXT','a');
  UTL_FILE.PUT_LINE(v_MyFileHandle, 'Hello Again for the Third Time! ' ||
                    TO_CHAR(SYSDATE,'MM-DD-YY HH:MI:SS AM'));
  UTL_FILE.FCLOSE(v_MyFileHandle);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR ' || TO_CHAR(SQLCODE) || SQLERRM);
    utl_file.fremove('TEST_DIR','HELLO.TXT');
    NULL;
END;
PL/SQL block to call the first procedure:
set serveroutput on;
begin
  sharefle;
end;
/
This code is a pretty simple example of what you asked. If there is an exception, you can see in Procedure 2 that the exception block removes the file 'HELLO.TXT' (both Procedure 1 and Procedure 2 use the same file). I have checked this myself and it works. Raise your own exception and verify it for yourself. In case of any doubt, please do comment.
Note: this is NEVER THE BEST WAY TO DO IT. I have only shown that it CAN be done this way. Thanks :)

Related

oracle clear session in LOOP

I have a very strange situation and I don't know how to solve it. There is a package that isn't mine, let's call it XXPACK, and I must call one of its procedures in a loop over a table (assume the table is named xxtable).
The situation is this:
if I run the following script twice (rownum <= 150 twice, so 300 records in total)
begin
  for rec in (select * from xxtable where rownum <= 150)
  loop
    XXPACK.XXPROC(rec.somecolumn);
  end loop;
end;
it works successfully, but when I run it once (with rownum <= 300, the same 300 records)
begin
  for rec in (select * from xxtable where rownum <= 300)
  loop
    XXPACK.XXPROC(rec.somecolumn);
  end loop;
end;
it gives me an error about not being able to update some table. I think it locks some tables or something like that. I need to call the procedure over the whole table. I tried dbms_lock.sleep(), dbms_session.reset_package(), and pragma autonomous_transaction, but none of them helped. I think that if I fully cleared the session state inside the loop the error would not occur. How can I solve this situation?

Oracle: "I/O: Connection reset" while compiling a procedure on a specific server

On one of our servers, while compiling:
CREATE OR REPLACE PROCEDURE PROC_KO
IS
  TABLE_SUFFIX_ VARCHAR2 (100);
  QUERY_DROP_   VARCHAR2 (1000);
BEGIN
  TABLE_SUFFIX_ := 'TABLE_SUFFIX';
  QUERY_DROP_   := 'DROP TABLE ' || 'TMP_' || TABLE_SUFFIX_;
END;
I get the following error:
I/O Error: Connection reset
What could be the cause of this?
It's possible that in your case you're prevented from creating a procedure by a DDL trigger, e.g.:
CREATE OR REPLACE TRIGGER bcs_trigger
  BEFORE CREATE ON SCHEMA
DECLARE
  oper ddl_log.operation%TYPE;
BEGIN
  -- in some case
  DBMS_SERVICE.DISCONNECT_SESSION(my_session);
END bcs_trigger;
Sounds like I had the same, or at least a similar, problem. I had already successfully created/compiled the procedure and later returned to make some changes using SQL Developer (17.2.0.188). When attempting to compile the procedure I received the same error (Connection Reset).
It was only after creating a new procedure by copying in blocks of code that I was able to isolate the 2-3 lines that seemed to be causing the problem. A slight rewrite of one of the lines (using different variable names) seemed to do the trick.
There was nothing about the lines in question that seemed to stand out. Simple for loop using a SQL statement.
Still don't know the root cause of the problem, but making minor changes to the code seemed to allow it to compile.

Oracle - Update statement in Stored Procedure does not work once in a while

We have a stored procedure that inserts data into a table from some temporary tables in an Oracle database. After the insert there is an update statement that updates a flag in the same table based on certain checks, and at the end of the stored procedure a commit happens.
The problem is that the update works in about 95% of cases, but sometimes it fails to update. When we run it again without changing anything, it works. Even executing the same stored procedure on the same data at some other time works perfectly. I haven't found any issue in the logic of the stored procedure. I suspect there is some database-level issue that we aren't able to find (maybe related to concurrency). Any ideas on this would be very helpful.
Without seeing the source code we will just be guessing. The most obvious suggestion that I can think of is that it hits an exception somewhere along the way in some cases and never gets as far as the commit. Another possibility is that there is a lock on the table during execution when it fails.
Probably the best thing to investigate further would be to add an exception handler that writes the exceptions to some table or file and see what error is raised e.g.
-- create a logging table
create table tmp_error_log (timestamp timestamp(0), error_text varchar2(1000));
-- add a variable to your procedure declaration
v_sql varchar2(1000);
-- add an exception handler just before the final end; statement of your procedure
exception
  when others then
    begin
      v_sql := 'insert into tmp_error_log values(''' ||
               to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') || ''', ''' || SQLERRM || ''')';
      dbms_output.put_line(v_sql);
      execute immediate v_sql;
      commit;
    end;
-- see what you get in the table
select * from tmp_error_log;
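To chase the locking possibility mentioned above, one quick check (a sketch, assuming you have the privileges to query v$session) is to look for blocked sessions while the procedure appears to hang:
-- sessions that are currently blocked, and who is blocking them
select sid, serial#, blocking_session, seconds_in_wait, sql_id
from   v$session
where  blocking_session is not null;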

Convert a PL/SQL script to a stored procedure

Right now I'm importing and transforming data into an Oracle database as follows:
A program regularly polls specific folders; once a file is found, it executes a batch file which does some light transformation in Python and bash and then calls SQL*Loader to load the CSV file into a staging table.
Then the batch script calls a SQL script (via SQL*Plus) to do the final transformation and insert the transformed data into the master tables for their respective staging tables.
The problem with this method is that there's no error handling on the SQL*Plus side; e.g. if an INSERT INTO statement fails because of a violated constraint (or any other reason), it will still continue to execute the rest of the statements in the SQL script.
Ideally, if any exception occurs, I'd prefer all changes to be rolled back and details of the exception inserted into an ETL log table.
Stored procedures seem to be a good fit as exception handling is built in. However, I'm struggling with the syntax - specifically how I can take my big SQL scripts (which are just a combination of INSERT INTO, UPDATE, CREATE, DROP, DELETE, etc. statements) and turn them into a stored procedure with some very basic error handling.
What I'm hoping for is either:
a quick & dirty dummy's guide to taking my depressing blob of PL/SQL and getting it to execute within a stored procedure, OR
any alternative (if a stored proc isn't appropriate) which offers the same functionality, i.e. a way to execute a bunch of SQL statements and roll back if any of those statements throws an exception.
About my attempts - I've tried copying portions of my SQL scripts into a stored procedure, but they always fail to compile with the error 'PLS-00103: Encountered the symbol ... when expecting one of the following'. E.g.
CREATE OR REPLACE PROCEDURE ETL_2618A AS
BEGIN
  DROP SEQUENCE "METER_REPORTING"."SEQ_2618";
  CREATE SEQUENCE SEQ_2618;
END ETL_2618A;
Oracle documentation isn't terribly accessible and I've not had much luck with googling/searching StackOverflow, but I apologise if I've missed something obvious.
To do DDL in PL/SQL you need to use dynamic SQL:
CREATE OR REPLACE PROCEDURE testProc IS
  s_sql VARCHAR2(500);
BEGIN
  s_sql := 'DROP SEQUENCE "METER_REPORTING"."SEQ_2618"';
  EXECUTE IMMEDIATE s_sql;
  s_sql := 'CREATE SEQUENCE "METER_REPORTING"."SEQ_2618"';
  EXECUTE IMMEDIATE s_sql;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END testProc;
/
If you run the script in SQL*Plus you can use:
WHENEVER SQLERROR
to control what should happen when an error occurs.
http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12052.htm
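For example, a minimal sketch (the directive itself is standard SQL*Plus; the script name below is made up):
-- at the top of the SQL*Plus script: on any SQL error, roll back,
-- stop executing the rest of the script and return the error code
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK

@load_staging_to_master.sql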
Adding exception handling to a PL/SQL proc or script isn't difficult, but of course some coding is required. Here's your procedure dressed up slightly with some very basic error reporting added:
CREATE OR REPLACE PROCEDURE ETL_2618A AS
  nCheckpoint NUMBER;
BEGIN
  nCheckpoint := 1;
  EXECUTE IMMEDIATE 'DROP SEQUENCE "METER_REPORTING"."SEQ_2618"';
  nCheckpoint := 2;
  EXECUTE IMMEDIATE 'CREATE SEQUENCE SEQ_2618';
  RETURN;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ETL_2618A failed at checkpoint ' || nCheckpoint ||
                         ' with error ' || SQLCODE || ' : ' || SQLERRM);
    RAISE;
END ETL_2618A;
Not tested on animals - you'll be first! :-)

dbms_output.put_line

Does dbms_output.put_line decrease the performance in plsql code?
Every extra line of code decreases the performance of code. After all, it is an extra instruction to be executed, which at least consumes some CPU. So yes, dbms_output.put_line decreases the performance.
The real question is: does the benefit of this extra line of code outweigh the performance penalty? Only you can answer that question.
Regards,
Rob.
Yes, it's another piece of code that needs to be executed, but unless the output is actually turned on, I think the overhead is quite minimal.
Here's an AskTom question with more details: Is there a performance impact for dbms_output.put_line statements left in packages?
You can look into conditional compilation so that the DBMS_OUTPUT.PUT_LINE calls are only included in the compiled code if the procedure is compiled with the appropriate option.
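A minimal sketch of that, assuming a compilation flag named debug_on (the flag name and the procedure are invented for illustration):
ALTER SESSION SET PLSQL_CCFLAGS = 'debug_on:TRUE';

CREATE OR REPLACE PROCEDURE demo_cc AS
BEGIN
  -- this call is only compiled into the procedure when debug_on is TRUE
  $IF $$debug_on $THEN
  dbms_output.put_line('debug: conditional compilation is active');
  $END
  NULL;  -- real work goes here
END demo_cc;
/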
One question is: has DBMS_OUTPUT.ENABLE been called?
If so, any value passed to DBMS_OUTPUT.PUT_LINE will be recorded in the session's memory structure. If you keep pushing stuff in there and never take it out (which might be the case with some application server connections), you might find that after a few days you have a LOT of stuff in memory.
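If that buffer growth is a concern, one option (a sketch, assuming you simply want to discard the output) is to drain the buffer periodically with DBMS_OUTPUT.GET_LINES:
DECLARE
  l_lines dbms_output.chararr;  -- collection type provided by DBMS_OUTPUT
  l_count INTEGER := 1000000;   -- upper bound on lines to pull out per call
BEGIN
  -- empties whatever has accumulated in this session's DBMS_OUTPUT buffer
  dbms_output.get_lines(l_lines, l_count);
END;
/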
I use a log table instead of dbms_output. Make sure to set it up as an autonomous transaction, something like this (modify for your needs, of course):
create or replace package body somePackage as
  ...
  procedure ins_log(
    i_msg         in varchar2,
    i_msg_type    in varchar2,
    i_msg_code    in number default 0,
    i_msg_context in varchar2 default null
  ) is
    pragma autonomous_transaction;
  begin
    insert into myLogTable
      (created_date, msg, msg_type, msg_code, msg_context)
    values
      (sysdate, i_msg, i_msg_type, i_msg_code, i_msg_context);
    commit;
  end ins_log;
  ...
end;
Make sure you create your log table, of course. In your code, if you're doing many operations in a loop, you may want to log only once per N operations, something like:
create or replace procedure myProcedure as
  cursor some_cursor is
    select * from someTable;
  v_ctr pls_integer := 0;
begin
  for rec in some_cursor
  loop
    v_ctr := v_ctr + 1;
    -- do something interesting
    if (mod(v_ctr, 1000) = 0) then
      somePackage.ins_log('Inserted ' || v_ctr || ' records',
                          'Log',
                          i_msg_context => 'myProcedure');
    end if;
  end loop;
  commit;
exception
  when others then
    somePackage.ins_log(SQLERRM, 'Err', i_msg_context => 'myProcedure');
    rollback;
    raise;
end;
Note that the autonomous transaction will ensure that your log stmt gets inserted, even if an error occurs and you rollback everything else (since its a separate transaction).
Hope this helps...much better than dbms_output ;)
It depends on the ratio of how many times you call dbms_output.put_line versus what else you do in PL/SQL.
Using DBMS_OUTPUT might also be the cause of the following error:
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Resources