Rollback after coming out of the session in Oracle SQL*Plus

I am writing a wrapper shell or Perl script that opens an Oracle session using SQL*Plus and then executes the SQL files it finds by scanning a directory. For example, if the directory contains multiple SQL files,
e.g. first.sql, second.sql, third.sql,
I plan to create a single file (AllSqlFilesInDirectory.sql) with the content below.
>cat AllSqlFilesInDirectory.sql
@first.sql
@second.sql
@third.sql
>
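A minimal sketch of how the wrapper might build that file, assuming a POSIX shell; the directory path is illustrative, not from the original:

#!/bin/sh
# Build a master script that runs every .sql file in the directory.
# /path/to/sqldir is a hypothetical location; adjust as needed.
for f in /path/to/sqldir/*.sql; do
  echo "@$f"
done > AllSqlFilesInDirectory.sql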
I plan to run AllSqlFilesInDirectory.sql in a SQL*Plus session, exit the session, and then search the log file for errors.
If there are any errors, I would like to execute a rollback. But since I would already be out of that SQL*Plus session, I think a rollback is not possible. I am concerned about the DML statements that were executed as part of those SQL files in the directory.
So I have these doubts:
Can I simply ignore rollback and not be concerned about it at all?
Can I do a rollback for a session that was already closed?
If the above is possible, how can I do it?

Can I simply ignore rollback and not be concerned about it at all?
That's a business question you'd have to answer. If you want the changes to be rolled back when there is an error, you need to do a rollback.
Can I do a rollback for a session that was already closed?
As a practical matter, probably not. Technically you can, using flashback transaction backout, but that is generally far more complexity than you'd normally want to deal with.
If the above is possible, how can I do it?
Rather than writing to a log file and parsing that log file to determine whether there were any errors, it is almost certainly much easier to simply put
whenever sqlerror exit rollback
at the top of your script. That tells SQL*Plus to roll back the transaction and exit whenever an error is encountered, so you don't have to write logic to parse the log file.
WHENEVER SQLERROR documentation
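For example, with the files from the question, the master script could look like this (a sketch; the final commit is an assumption about the desired behavior on success):

whenever sqlerror exit rollback
@first.sql
@second.sql
@third.sql
commit;
exit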

Related

Execute Oracle script with SqlPlus; go to state before script processing if any exceptions

I have a lot of scripts that contain all kinds of transactions, including complex DML and DDL.
I need to find a way to run them fully or not at all. I'd like to see the following behavior: if any error occurs in the middle of script processing, go back to the state before the script processing started.
I thought I would just put the whole script into one big transaction, put a savepoint at the beginning and roll back to the savepoint in case of any exception, but AFAIK that's impossible, as Oracle does not support nested transactions.
Do you mind sharing your thoughts on this case?
I don't think there is an easy solution for this, because you have DDL in your scripts. DDL issues an implicit commit before it executes, so a rollback will not help.
As an alternative you could use Oracle's flashback feature, but this impacts the entire database. You create a flashback restore point, run the script, and if any errors occurred you flash the database back to the restore point. This reverts all the changes in all the schemas of your database, so it is best suited to a separate database used for running/testing your scripts. It is rather fast, but the database must be in ARCHIVELOG mode. A sketch of this approach follows below.
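A hedged sketch of the flashback approach; the restore point and script names are illustrative, and the flashback steps must be run as SYSDBA:

create restore point before_script guarantee flashback database;
@my_big_script.sql
-- if the script failed, revert the whole database:
shutdown immediate
startup mount
flashback database to restore point before_script;
alter database open resetlogs;
drop restore point before_script;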
Another option is to use the Data Pump export/import utilities (expdp/impdp). This is hard to automate in a single script, so you do the recovery manually: take an export dump, run the script, and if any errors happened, restore your schemas from the dump with impdp (see the sketch below).
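A hedged sketch of the export/import fallback; credentials, the directory object, and the schema and file names are all illustrative, and note that table_exists_action=replace only covers tables, so a full reset may need the schema objects dropped first:

expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=before_script.dmp schemas=scott
sqlplus scott/tiger @my_big_script.sql
# if the script failed, restore the schema objects from the dump:
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=before_script.dmp schemas=scott table_exists_action=replace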
Perhaps what you need is the WHENEVER SQLERROR EXIT clause.
Check it out here.

go-sqlite3 with journal_mode=WAL gives 'database is locked' error

In Go, I open a SQLite3 database using the mattn/go-sqlite3 module. I set the database journaling mode to WAL immediately after opening, using PRAGMA journal_mode=WAL.
However, if I try to open the database from a second process while the first is running, the second cannot open it and instead gets a "database is locked" error. This happens even if I have not performed any transactions.
The connection string I am using is:
"file:mydbfile.db?cache=shared&mode=rwc"
(I intend to answer my own question, since it took a long time to debug)
If you want to enable journal_mode=WAL, you should add it to the connection string:
"file:mydbfile.db?cache=shared&mode=rwc&_journal_mode=WAL"
As part of opening the database, go-sqlite3 executes PRAGMA statements to set various defaults. One of these defaults is journal_mode=DELETE. However, if another process has the database open, the journal mode cannot be changed back to DELETE: executing that statement fails with "database is locked", and so the open operation fails with that error.
The complete list of connection string parameters is listed at https://github.com/mattn/go-sqlite3
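A minimal Go sketch of the working setup, using the connection string from the answer (the file name is illustrative):

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	// _journal_mode=WAL makes the driver configure WAL at open time,
	// instead of trying to switch the journal mode to DELETE first.
	db, err := sql.Open("sqlite3", "file:mydbfile.db?cache=shared&mode=rwc&_journal_mode=WAL")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil { // actually opens the database file
		log.Fatal(err)
	}
}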

What happens to an uncommitted transaction in an Oracle RMAN backup

I have written a script for Oracle backup and restore using RMAN.
Note: I took a backup of the database plus the archive logs.
Then I ran some SQL statements in Oracle but did not commit the transaction, so the changes may be somewhere in the redo logs; I am not sure about that.
In that situation I took the backup (database + archive logs) and did a restore.
The uncommitted data was not present.
I am confused about this scenario: is this behavior correct, or is it losing my data, or did I miss something?
This is perfectly fine. Your transaction is in fact in the redo, but since you didn't commit it, the recovery process rolled it back after reapplying it, because it couldn't find a commit at the end of the redo stream. This is by design. The opposite would be a problem: if you had committed, then no matter what happened to the server (power loss, crash), you should be able to see the data after restoring the server and applying all of the redo/archive logs.
The reason is that once you commit, all of the work needed to re-execute your transaction is guaranteed to be stored on disk (in the redo log files). There are other kinds of commit (COMMIT WRITE NOWAIT, for example) that bypass this behaviour and should be avoided.
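For reference, the asynchronous commit variant mentioned above looks like this (the table name is illustrative); it returns before the redo reaches disk, so a crash at the wrong moment can lose a transaction that appeared committed:

insert into t values (1);
commit write nowait;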
Hope this helps.

Is it possible to execute a SQL script file stored on the database server using a remote command?

Say I have a SQL script physically stored on the database server. Is there a SQL command I can send Oracle from an application to tell it to execute that script?
(Yes, I know this sounds ridiculous. I'm considering it as part of a work around of a very nasty problem that "shouldn't happen" but does.)
The easiest option would generally be to use the dbms_scheduler package to run an external job. This would let you invoke a shell script that starts SQL*Plus, connects to the database, and runs your .sql script (a sketch follows below).
It would also be possible to create a Java stored procedure that uses Java's ability to call out to the operating system to run the same shell script. That tends to be more of a security issue, though, since you end up granting the owner of that procedure the privilege to run any command on the database server as the oracle user. That would include things like connecting to the database as SYSDBA or corrupting the database (accidentally or intentionally), so it's something that auditors would generally frown upon.
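A hedged sketch of the external-job approach; the job name, script path, and script contents are illustrative, and the database user needs the CREATE EXTERNAL JOB privilege:

begin
  dbms_scheduler.create_job(
    job_name   => 'run_server_script',
    job_type   => 'EXECUTABLE',
    job_action => '/home/oracle/scripts/run_my_script.sh',
    enabled    => true,   -- with no schedule, the job runs once, immediately
    auto_drop  => true);
end;
/

where run_my_script.sh might contain something like:

#!/bin/sh
sqlplus -s user/password @/home/oracle/scripts/the_script.sql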

Is there a way to enable procedure logging for an Oracle Scheduled Job?

I'm trying to enable logging on an Oracle Scheduler job so that when I look at the run details of the job, I can discern what the job's procedure worked on and what it did. Currently, the procedure logs via dbms_output.put_line(), which is convenient for procedures run from SQL*Plus, where you just enable set serveroutput on to see the output. However, I cannot figure out how to make this logging information show up in the job run details. Specifically, I'm looking at:
select additional_info from dba_scheduler_job_run_details;
Also, this data appears to be displayed in the run details within Enterprise Manager, under the instances of the job.
So, is there a way, a setting or a different package, to have simple logging for an Oracle Scheduled Job?
You could add something at the end of the call that calls dbms_output.get_lines, loads the data into a CLOB and stores it somewhere for your perusal. You'd probably want something that calls DBMS_OUTPUT.ENABLE at the start, too.
Personally, I'd avoid DBMS_OUTPUT.PUT_LINE and write your own logging routine, either to a flat file or to a table (using autonomous transactions).
Autonomous transactions are the way to go!
You define a log table with the information you want to log and write a procedure that inserts into this table and commits as an autonomous transaction. That way you can see what's going on, even while the job is still running.
AFAIK there is no other way to get the information while the job is still running; DBMS_OUTPUT output only becomes visible after the job is done.
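A hedged sketch of such a logger (the table and procedure names are illustrative):

create table job_log (
  log_time timestamp default systimestamp,
  message  varchar2(4000)
);

create or replace procedure log_msg(p_message in varchar2) is
  pragma autonomous_transaction;
begin
  insert into job_log (message) values (p_message);
  commit; -- commits only this autonomous transaction, not the caller's work
end;
/

The job's procedure can then call log_msg at each step, and you can query job_log from another session while the job is still running.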
