Oracle - Transaction with a series of DDL statements

I have a series of RENAME table DDL statements that I would like to run within a transaction. During this period there will also be other sessions running that might hijack the tables involved in the renames and cause resource contention or a deadlock.
Is it possible to achieve this in Oracle? I understand that each DDL statement commits after it executes, which frees up the tables for other sessions to hijack. How can I ensure that the session executing the DDL statements completes successfully before other sessions can access the tables?
--LOCK TABLE a
RENAME a TO b;
--possible contention here, because the implicit commit releases the lock on table a
RENAME b TO c;
RENAME c TO d;
--commit

Each DDL statement in Oracle is its own transaction. A DDL statement makes anywhere from a few to many changes in the data dictionary, in tables like obj$. I am not sure, but looking at the major work Oracle has put into ensuring that locking is not an issue even in the early versions of their platform, I think they found it easier to commit after each DDL statement to keep the locks short-lived and to avoid deadlocks within a session or between sessions doing DDL. Even so, under some circumstances the Oracle kernel still does not cope well with dropping and creating too many objects during production use, and you get ORA-600 errors thrown at your head.
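To see the implicit commit in action, here is a tiny demonstration (the table and index names are just placeholders for illustration):
CREATE TABLE t_demo (x NUMBER);
INSERT INTO t_demo VALUES (1);           -- part of the current transaction
CREATE INDEX t_demo_ix ON t_demo (x);    -- DDL: implicitly commits the INSERT above
ROLLBACK;                                -- has no effect, the row is already committed
SELECT COUNT(*) FROM t_demo;             -- returns 1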
One workaround is to use the data-model versioning that was introduced a few years ago. I have no working experience with it, since it is too restricted for my work, but you can find more on it by searching for 'Edition-based redefinition' or going to the Oracle manual. It might not be available in the Oracle edition you are licensed for.
Another workaround is to simply execute the statements during uptime. But this will generally break sessions unless the code your users are executing recovers from the resulting errors automatically. Remember that each object has an ID and a name. Changing the name does not change the ID, so many cached references to the object will need to be refreshed, which typically shows up as ORA-04063 or similar errors. Oracle has no pause/suspend for sessions, as far as I know.
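After such renames, a quick check against the standard USER_OBJECTS view shows which dependent objects were invalidated and will keep raising ORA-04063 until they are successfully recompiled (a minimal illustration):
SELECT object_name, object_type, status
FROM   user_objects
WHERE  status = 'INVALID';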

Related

Preserve exclusive table lock after DDL in Oracle

It is a well known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection so that after a DDL statement is executed, all locks are held until they are explicitly released?
The obvious approach does not work, because executing a DDL statement automatically commits the transaction and, with it, releases the locks:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because it is only an advisory lock: a concurrent session would have to respect that lock, or at least be aware of its existence.
Moreover, I cannot control which statements are executed by concurrent threads/sessions. I can only control what is executed in the current session, and I need to ensure that no statements against tables X and Y run from other sessions until the current session has finished its work.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL will always end a transaction, so no lock on that object is going to survive it. Even if you immediately attempted to lock it after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by using another dummy table, or a row in a dummy table, assuming you control the code of every process wishing to access the table. If that is the case, then before accessing the table, each process first attempts to lock the dummy table (or a row in it) and continues to the main table only if that lock succeeds. The process that does the DDL takes out that same lock (preventing other processes from proceeding), then runs the DDL in a subroutine (a named PL/SQL block) declared with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table because if you tried to use the same table you want to modify, you would deadlock yourself. Of course, this only works if you can make all other processes perform the lock-the-dummy-table safety check before they proceed, as in the sketch below.
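A minimal sketch of that pattern, assuming a gate table and procedure you create yourself (the names APP_DDL_GATE and RUN_DDL, and the ALTER statements, are placeholders, not from the original answer):
-- One-time setup: the dummy "gate" table that every process must lock first.
CREATE TABLE app_ddl_gate (id NUMBER PRIMARY KEY);
INSERT INTO app_ddl_gate VALUES (1);
COMMIT;

-- The DDL runs in an autonomous transaction, so its implicit commit
-- ends that transaction instead of the caller's.
CREATE OR REPLACE PROCEDURE run_ddl (p_stmt IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE p_stmt;
END;
/

-- The session doing the DDL:
LOCK TABLE app_ddl_gate IN EXCLUSIVE MODE;          -- held until COMMIT/ROLLBACK
BEGIN
  run_ddl('ALTER TABLE x ADD (new_col NUMBER)');    -- placeholder DDL
  run_ddl('ALTER TABLE y ADD (new_col NUMBER)');
END;
/
COMMIT;                                             -- finally releases the gate

-- Every other process, before touching X or Y:
LOCK TABLE app_ddl_gate IN SHARE MODE;              -- or SELECT ... FOR UPDATE on its row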
Lastly, although what I said above should work, it is likely that you're trying to do something you shouldn't. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, and so on. It can be a real mess.

DDL changes in the last month for a user in Oracle

I want to know what DDL changes (e.g. adding a column to a table) were made by a specific user after a specific date in Oracle. One strategy is to query the 'user_objects' view. Is there any other way to do this?
Can I also find out what the DDL change was (e.g. the name of the column added to the table) after a specific date?
Version - Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
You could audit DDL statements, capture DDL statements with a DDL trigger, or use a source control and build management system to track the history of changes, but any of those has to be in place on or before the specified date. Or, if you happen to have every archived log since the specified date, you could laboriously go through them looking for DDL statements (this will not be a fun exercise if the specified date isn't really recent).
Otherwise, no. You can certainly look at last_ddl_time in user_objects. But there are DDL statements that aren't really changes (GRANT is a DDL statement, for example, and PL/SQL objects get recompiled automatically when there is DDL on an object they depend on) that will update last_ddl_time without being what most people would consider a change. Unless you had auditing enabled, the data dictionary isn't going to be able to tell you which DDL caused last_ddl_time to change, so you won't know whether it was something you consider a change, whether there were multiple changes, or what those changes were. If you happen to be lucky enough that your new column has an index on it, you could potentially infer when the column was added by looking at the creation date of the associated index.
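Going forward, a schema-level DDL trigger is one way to capture this information as it happens (a rough sketch; the audit table and trigger names are just examples):
CREATE TABLE ddl_audit_log (
  ddl_date    DATE,
  ddl_user    VARCHAR2(128),
  object_name VARCHAR2(128),
  object_type VARCHAR2(30),
  ddl_event   VARCHAR2(30)
);

CREATE OR REPLACE TRIGGER trg_capture_ddl
AFTER DDL ON SCHEMA
BEGIN
  INSERT INTO ddl_audit_log
  VALUES (SYSDATE, ora_login_user, ora_dict_obj_name,
          ora_dict_obj_type, ora_sysevent);
END;
/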

How to check whether a delete has occurred in a table at a specified time

Recently, a very strange scenario was reported from one of our sites.
Based on our fields, we concluded that some kind of delete must have happened for that scenario to occur.
There is no delete for that table anywhere in our application code. So we checked in gv$sqlarea (since we use RAC) whether there were any DELETE statements against this table. We found nothing.
Then we tried to do the same kind of delete through PL/SQL Developer. We are able to trace all such deletes through gv$sqlarea or gv$session. But when we use the query below, then lock, edit and commit the rows in PL/SQL Developer, there is no trace:
select t.*, t.rowid
from <table> t
One thing we did find is that sys.mon_mods$ holds a count of deletes, but it is not retained long enough for us to trace the change by timestamp.
Can anyone help me track this down?
Oracle Version: 11.1.0.7.0
Type : RAC (5 instances)
gv$sqlarea just shows the SQL statements that are in the shared pool. If the statement is only executed once, depending on how large the shared pool and how many distinct SQL statements are executed, a statement might not be in the shared pool very long. I certainly wouldn't expect that a one-time statement would still be in the shared pool of a reasonably active system after a couple hours.
Assuming that you didn't enable auditing and that you don't have triggers that record deletes, is the system in ARCHIVELOG mode? Do you have the archived logs from the point in time where the row was deleted? If so, you could potentially use LogMiner to look through the archived logs to find the statement in question.
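A rough sketch of that LogMiner approach (the archived log file names, owner and table name below are placeholders for your own values):
-- Register the archived logs that cover the suspected time window.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/arch_1_1234.arc',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/arch_1_1235.arc',
                          options     => DBMS_LOGMNR.ADDFILE);
  -- Use the online catalog so object names are resolved.
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Look for deletes against the table in question.
SELECT timestamp, session_info, sql_redo
FROM   v$logmnr_contents
WHERE  operation = 'DELETE'
AND    seg_owner = 'APP_OWNER'
AND    seg_name  = 'YOUR_TABLE';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/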

Can you use joins with direct-path inserts?

I have tried to find examples, but they are all simple ones with a single WHERE clause. Here is the situation: I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (convert) the data from the legacy tables to the new tables. Because this is a different set of tables, the conversion requires complex joins to put the old data into the new tables correctly.
So: old tables, old data.
The new tables must end up with the old data, but it requires lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT ... SELECT (lots of joins)
Does direct path apply to tables that are already in the same database (a transfer between tables), or is it only for loading tables from, say, a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
No, on the contrary: it means you need to take a backup after a NOLOGGING load, not that you can't back up the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle reads the REDO and rolls forward; then it uses UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an INSERT statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, above the high-water mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the INSERT /*+ APPEND */, you set the table to NOLOGGING (i.e. ALTER TABLE tab_name NOLOGGING;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, then suffer a media failure, and the datafile containing the segment with the NOLOGGING operation is restored from a backup taken before the NOLOGGING load, the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how do you avoid this? You should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the NOLOGGING load, there is no problem, because the data will be in the data blocks in the file that was restored.
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select .... from source1, source2..., sourceN where
It should work fine. Consider, though, that the performance of the load will be limited by the performance of that query, so be sure it's well tuned if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But, also consider the backup recovery implications, if you decide to implement NOLOGGING.
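For example, a minimal sketch of that workflow (the table and column names are placeholders, and the backup at the end is whatever your backup strategy uses):
ALTER TABLE target_table NOLOGGING;        -- optional: skip redo for the load

INSERT /*+ APPEND */ INTO target_table (col1, col2, col3)
SELECT s1.col1, s2.col2, s3.col3
FROM   source1 s1
JOIN   source2 s2 ON s2.source1_id = s1.id
JOIN   source3 s3 ON s3.source2_id = s2.id;

COMMIT;                                    -- required before the table can be queried again

ALTER TABLE target_table LOGGING;          -- restore normal logging
-- ...and, if you used NOLOGGING, take a backup of the affected datafiles now.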
Hope that helps,
-Mark

Creating re-runnable Oracle DDL SQL Scripts

Our development team does all of their development on their local machines, databases included. When we make changes to schemas, we save the SQL to a file that is then committed to the version control system (if there is a better practice for this, I'd be open to hearing about that as well).
When working on SQL Server we'd wrap our updates in "if exists" statements to make them re-runnable. I am now working on an Oracle 10g project and I can't find any Oracle construct that does the same thing. I was able to find this thread on dbaforums.org, but the answer there seems a bit kludgy.
I am assuming this is for some sort of automated build process, where the build is redone from scratch if something fails.
As Shannon pointed out, PL/SQL objects such as procedures, functions and packages have the "create or replace" option, so a second recompile/re-run would be fine. Grants should be fine too.
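For instance, a script like this is re-runnable by design, since CREATE OR REPLACE simply recompiles the object if it already exists (the procedure itself is just an illustrative placeholder):
CREATE OR REPLACE PROCEDURE log_build_step (p_step IN VARCHAR2) AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('Completed build step: ' || p_step);
END;
/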
As for Table creations and DDLs, you could take one of the following approaches.
1) Do not add any drop commands to the scripts and ask your development team to come up with the revert-script for the individual modules.
So for each CREATE TABLE that they add to the build, they also add an equivalent "DROP TABLE .." to a script, say "build_rollback.sql". If your build fails, you can run this script before running the build from scratch.
2) The second (and most frequently used) approach I have seen is to include the DROP TABLE just before the CREATE TABLE statement and then ignore the "Table or view does not exist" errors in the build log (or suppress them explicitly, as in the sketch after the example). Something like:
DROP TABLE EMP;
CREATE TABLE EMP (
.......
.......
);
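If you would rather not have spurious errors in the build log at all, a common variation on approach 2 (a sketch, not taken from the linked thread) wraps the DROP in a PL/SQL block that ignores only ORA-00942:
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE emp';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN  -- ORA-00942: table or view does not exist
      RAISE;
    END IF;
END;
/
-- ...followed by the normal CREATE TABLE EMP ( ... );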
The thread you posted has a major flaw: in practice you always create tables incrementally. E.g. your database already has 100 tables and you are adding 5 more as part of this release. The script in that thread spools the DROP/CREATE for all 100 tables and then executes it, which does not make a lot of sense (unless you are building your database for the first time).
An SQL*Plus script will continue past errors unless you configure it otherwise (for example with WHENEVER SQLERROR EXIT).
So you could have all of your scripts use:
DROP TABLE TABLE_1;
CREATE TABLE TABLE_1 (...
This is an option in PowerDesigner, I know.
Another choice would be to write a PL/SQL script which scrubs a schema, iterating over all existing tables, views, packages, procedures, functions, sequences, and synonyms in the schema, issuing the proper DDL statement to drop them.
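A rough sketch of such a scrub script, assuming a disposable development schema (it drops everything it finds in the current schema, skipping recycle-bin objects):
BEGIN
  FOR o IN (SELECT object_name, object_type
            FROM   user_objects
            WHERE  object_type IN ('TABLE', 'VIEW', 'PACKAGE', 'PROCEDURE',
                                   'FUNCTION', 'SEQUENCE', 'SYNONYM')
            AND    object_name NOT LIKE 'BIN$%')
  LOOP
    EXECUTE IMMEDIATE 'DROP ' || o.object_type || ' "' || o.object_name || '"'
      || CASE o.object_type WHEN 'TABLE' THEN ' CASCADE CONSTRAINTS' ELSE '' END;
  END LOOP;
END;
/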
I'd consider decomposing the SQL to create the database; one giant script containing everything for the schema sounds murderous to maintain in a shared environment. Dividing at a Schema / Object Type / Name level might be prudent, keeping fully dependent object types (like Tables and Indexes) together.
