I have two tables:
ld_tbl - a partitioned table.
tgt_tabl - a non-partitioned table.
In my program I'm executing
alter table ld_tbl exchange partition prt with table tgt_tabl;
and after the exchange has finished I execute a drop on ld_tbl.
The problem is that if someone has fired a query against tgt_tabl, it throws an exception:
ORA-08103: object no longer exists
Even though I drop only ld_tbl and don't touch tgt_tabl. After several tests, I'm sure it's the drop that causes the exception. According to this information: Object no longer exists, the solution is to defer the drop.
My question is: how much time is needed between the exchange and the drop? How can I know that an operation like the drop will not hurt the other table?
Thanks.
"how much time need to be between the drop and the exchange?"
The pertinent question is, why is anybody running queries on the TGT_TABL. If I understand your situation correctly that is a transient table, used for loading data through Partition Exchange. So no business user ought to be querying it (they should wait until the data goes live in the partitioned table).
If the queries are coming from non-business users (DBAs, support staff) my suggestion would be to just continue as you do now, and send an email to those people explaining why they may occasionally get ORA-00813 errors.
If the queries are coming from business users then it's more difficult. There is no point in deferring the drop, because somebody could be running a query whenever you schedule it. You need to track down the users who are running these queries, discover why they are doing it and figure out whether there's some other way of satisfying the need.
But I don't think there's a technical fix you could apply. By using partition exchange you are already minimizing the window in which this error can occur.
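To make that window concrete, here is the sequence from the question spelled out, with a note on the mechanism (names are the ones from the question; the explanation assumes the usual segment swap that EXCHANGE PARTITION performs):

    -- The exchange is a data-dictionary operation: the segment that belonged to
    -- TGT_TABL now belongs to partition PRT of LD_TBL, and vice versa.
    ALTER TABLE ld_tbl EXCHANGE PARTITION prt WITH TABLE tgt_tabl;

    -- A query that started against TGT_TABL before the exchange is still reading
    -- its original segment, which now sits under LD_TBL. Dropping LD_TBL removes
    -- that segment, and the running query fails with ORA-08103.
    DROP TABLE ld_tbl;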
I am new to Teradata and fortunately got a chance to work on both DDL and DML statements.
One thing I observed is that Teradata is very slow when it comes time to UPDATE the data in a table with a large number of records.
The simplest way I found on Google to perform this update is to write an INSERT-SELECT statement with a CASE on the column whose values need to be replaced with the new ones.
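Roughly, the pattern looks like this (table and column names are just placeholders):

    -- Rebuild the rows with the new values applied via CASE, instead of an
    -- in-place UPDATE (all names here are made up for illustration).
    INSERT INTO sales_new (sale_id, region, amount)
    SELECT sale_id,
           CASE WHEN region = 'N' THEN 'NORTH' ELSE region END,
           amount
    FROM   sales_old;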
But what about when this situation arises in a data warehouse environment, where we need to update multiple columns in a table holding millions of rows?
Which would be the best approach to follow?
INSERT-SELECT only, MERGE-UPDATE, or MLOAD?
I'm also not sure whether any of the above approaches should be ruled out for this UPDATE operation.
Thank you in advance!
At enterprise level, we expect volumes to be huge, and updates are often part of scheduled jobs/scripts.
With huge volumes of data, an UPDATE is a costly operation that carries the risk of blocking the table for some time if the update fails (because of the rollback from the transient journal). Although scripts are tested well and failures seldom happen in production environments, it's always better to load the data that needs to be updated into a temporary table in the required form, and then insert it back into the same table after deleting the matching records, to maintain SCD-1 (where we don't keep history).
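A minimal sketch of that approach, with placeholder names throughout:

    -- 1. Stage the rows to be changed, already transformed into the required form.
    CREATE VOLATILE TABLE stg_customer AS (
        SELECT cust_id,
               new_segment AS segment,
               load_dt
        FROM   customer_updates
    ) WITH DATA ON COMMIT PRESERVE ROWS;

    -- 2. Delete the matching records from the target ...
    DELETE FROM customer
    WHERE  cust_id IN (SELECT cust_id FROM stg_customer);

    -- 3. ... and insert the staged rows back (SCD-1: no history kept).
    INSERT INTO customer (cust_id, segment, load_dt)
    SELECT cust_id, segment, load_dt
    FROM   stg_customer;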
I am experiencing a strange problem with an Oracle DB, and would like to ask if anyone has experienced a similar problem:
In ALL of my tables (user_tables) EVERY SINGLE row has been duplicated.
What kind of action could have caused such a thing?
Can I restore the previous state without cleaning every table "by hand"?
I can imagine many actions which could lead to this situation (for example running imp twice), but it doesn't matter. You should simply prevent such duplicates by means of Unique/Primary keys.
As for restoring the previous state, you may want to read about the flashback query feature:
http://docs.oracle.com/cd/B13789_01/appdev.101/b10795/adfns_fl.htm
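A minimal sketch of both ideas, assuming a hypothetical table T with key column ID and that the duplication is still within the undo retention window:

    -- Look at the table as it was before the duplication (flashback query).
    SELECT * FROM t AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

    -- Or clean up in place, keeping one copy of each ID ...
    DELETE FROM t
    WHERE  rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY id);

    -- ... and then prevent it from happening again.
    ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (id);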
I need your opinion in this situation. I’ll try to explain the scenario. I have a Windows service that stores data in an Oracle database periodically. The table where this data is being stored is partitioned by date (Interval-Date Range Partitioning). The database also has a dbms_scheduler job that, among other operations, truncates and drops older partitions.
This approach has been working for some time, but recently I had an ORA-00054 error. After some investigation, the error was reproduced with the following steps:
Open one sqlplus session, disable auto-commit, and insert data in the partitioned table, without committing the changes;
Open another sqlplus session and truncate/drop an old partition (DDL operations are automatically committed, if I'm not mistaken). We will then get the ORA-00054 error.
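A minimal sketch of that repro (table, partition and column names are placeholders):

    -- Session 1: uncommitted DML holds a DML (TM) lock.
    INSERT INTO events_part (event_id, event_dt)
    VALUES (1, DATE '2012-01-01');
    -- no COMMIT

    -- Session 2: the partition maintenance DDL cannot get the lock it needs
    -- and fails immediately with ORA-00054 while session 1 has not committed.
    ALTER TABLE events_part DROP PARTITION p_2012_01;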
There are some constraints worth mentioning:
I don’t have DBA access to the database;
This is a legacy application and a complete refactoring isn't feasible;
So, in your opinion, is there any way of dropping these old partitions without the risk of running into an ORA-00054 error and without the intervention of the DBA? I can just delete the data, but the number of empty partitions will grow every day.
Many thanks in advance.
This error means somebody (or something) is working with the data in the partition you are trying to drop. That is, the lock is granted at the partition level. If nobody was using the partition your job could drop it.
Now you say this is a legacy app and you don't want to, or can't, refactor it. Fair enough. But there is clearly something not right if you have a process which is zapping data that some other process is using. I don't agree with #tbone's suggestion of just looping until the lock is released: you can't just get rid of data which somebody is using without establishing why they are still working with data that they apparently should not be using.
So, the first step is to find out what the locking session is doing. Why are they still amending this data your background job wants to retire? Here's a script which will help you establish which session has the lock.
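Something along these lines, for example (placeholder table name; it queries the V$ and DBA_ views):

    -- Which session holds a DML lock on the partitioned table?
    SELECT s.sid,
           s.serial#,
           s.username,
           s.osuser,
           s.program,
           o.object_name,
           o.subobject_name      -- the partition, where applicable
    FROM   v$locked_object l
    JOIN   dba_objects     o ON o.object_id = l.object_id
    JOIN   v$session       s ON s.sid       = l.session_id
    WHERE  o.object_name = 'YOUR_PARTITIONED_TABLE';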
Except that you "don't have DBA access to the database". Hmmm, that's a curly one. Basically this is not a problem which can be resolved without DBA access.
It seems like you have several issues to deal with. Unfortunately for you, they are political and architectural rather than technical, and there's not much we can do to help you further.
How about wrapping the truncate or drop in PL/SQL that tries the operation in a loop, waiting x seconds between tries, for a maximum number of tries? Then use dbms_scheduler to call that procedure/function.
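A rough sketch of that idea (object names and retry settings are placeholders; DBMS_LOCK.SLEEP needs an EXECUTE grant on DBMS_LOCK):

    CREATE OR REPLACE PROCEDURE drop_old_partition (
        p_table     IN VARCHAR2,
        p_partition IN VARCHAR2
    ) AS
        e_resource_busy EXCEPTION;
        PRAGMA EXCEPTION_INIT(e_resource_busy, -54);   -- ORA-00054
        c_max_tries CONSTANT PLS_INTEGER := 10;
        c_wait_secs CONSTANT PLS_INTEGER := 30;
    BEGIN
        FOR i IN 1 .. c_max_tries LOOP
            BEGIN
                EXECUTE IMMEDIATE 'ALTER TABLE ' || p_table ||
                                  ' DROP PARTITION ' || p_partition;
                RETURN;                            -- success, stop retrying
            EXCEPTION
                WHEN e_resource_busy THEN
                    IF i = c_max_tries THEN
                        RAISE;                     -- give up and let the job fail
                    END IF;
                    DBMS_LOCK.SLEEP(c_wait_secs);  -- wait before the next attempt
            END;
        END LOOP;
    END drop_old_partition;
    /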
Maybe this can help. It seems to be the same issue as the one that you describe.
(ignore the comic sans, if you can) :)
Oracle's database change notification feature sends rowids (physical row addresses) on row inserts, updates and deletes. As indicated in Oracle's documentation, this feature can be used by the application to build a middle-tier cache. But this seems to contradict what we find when we take a detailed look at how rowids work.
ROWIDs (physical row addresses) can change when various database operations are performed, as indicated by this Stack Overflow thread. In addition to this, as Tom mentions in this thread, clustered tables can have the same rowids.
Based on the above research, it doesn't seem safe to use the rowid sent during database change notification as the key in the application cache, right? This also raises a question: should the database change notification feature be used to build an application server cache at all? Or should all the application server clusters be restarted (to reload/refresh the cache) whenever the tables of the cached objects undergo operations that cause rowids to change? Would that be a good assumption to make for production environments?
It seems to me that none of the operations that can potentially change the ROWID would be carried out in a production environment while the application is running. Furthermore, I've seen a lot of production software that uses the ROWID across transactions (usually just for a few seconds or minutes). That software would probably fail before your cache if the ROWID changed. So building a database cache based on change notification seems reasonable to me. Just provide a small disclaimer regarding the ROWID.
The only somewhat problematic operation is an update causing a row to move to another partition. But that rarely happens, because it would defeat the purpose of the partitioning if it occurred regularly. The designer of a particular database schema will be able to tell you whether such an operation can occur and is relevant for caching. If none of the tables has ENABLE ROW MOVEMENT set, you don't even need to ask the designer.
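For example, a quick way to make that check in the data dictionary:

    -- Tables that could relocate rows (and therefore change ROWIDs) on update.
    SELECT table_name
    FROM   user_tables
    WHERE  row_movement = 'ENABLED';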
As to duplicate ROWIDs: ROWIDs aren't unique globally, they are unique within a table. And you are given both the ROWID and the table name in the change notification. So the tuple of ROWID and table name is a perfect unique key for building a reliable cache.
I need some help in auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any table in any field. So the things we want to have in this audit are:
user who modified
time of change occurred
old value and new value
So we started creating the trigger which was supposed to perform the audit for any table, but then ran into issues...
As I mentioned before, we have so many tables that we cannot go creating a trigger for each one. So the idea is to create a master trigger that behaves dynamically for whatever table fires it. I have been trying to do this but with no luck at all... it seems that Oracle restricts a trigger to the single table declared in its code, rather than letting it be attached dynamically the way we want.
Do you have any idea on how to do this or any other advice for solving this issue?
If you have 10g enterprise edition you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means including the CHANGED_TIMESTAMP and CHANGED_USER on the application table, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, and it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
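For one hypothetical table EMP (with a mirrored audit table EMP_AUD), the generated trigger would look something like this:

    CREATE OR REPLACE TRIGGER emp_aud_trg
    AFTER INSERT OR UPDATE OR DELETE ON emp
    FOR EACH ROW
    BEGIN
        IF DELETING THEN
            -- just the primary key; every other column stays empty
            INSERT INTO emp_aud (empno, changed_timestamp, changed_user)
            VALUES (:OLD.empno, SYSTIMESTAMP, USER);
        ELSE
            -- a whole record with the :NEW values
            INSERT INTO emp_aud (empno, ename, sal, changed_timestamp, changed_user)
            VALUES (:NEW.empno, :NEW.ename, :NEW.sal, SYSTIMESTAMP, USER);
        END IF;
    END;
    /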
You don't need to write your own triggers.
Oracle ships with flexible and fine grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
You can audit so much that it can be like drinking from the firehose - and that can hurt the server performance at some point, or could leave you with so much audit information that you won't be able to extract meaningful information from it quickly, and/or you could end up eating up lots of disk space. Spend some time thinking about how much audit information you really need, and how long you might need to keep it around. To do so might require starting with a basic configuration, and then tailoring it down after you're able to get a sample of the kind of volume of audit trail data you're actually collecting.
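As a starting point, putting a fine-grained auditing policy on a single table looks roughly like this (HR.EMPLOYEES and the policy name are placeholders; FGA needs Enterprise Edition):

    -- Capture DML against one table, including the SQL text that was run.
    BEGIN
        DBMS_FGA.ADD_POLICY(
            object_schema   => 'HR',
            object_name     => 'EMPLOYEES',
            policy_name     => 'EMP_DML_AUDIT',
            statement_types => 'INSERT,UPDATE,DELETE');
    END;
    /

    -- The captured activity ends up here.
    SELECT db_user, timestamp, sql_text
    FROM   dba_fga_audit_trail
    WHERE  policy_name = 'EMP_DML_AUDIT';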