I need your help. We have an old Oracle 10g database and I would like your recommendations on which maintenance tasks should be performed and when, e.g. #1 update fixed statistics once a month, #2 update table statistics once a week, #3 check for index fragmentation weekly and rebuild indexes when "x".
Does anyone have a maintenance checklist, outline, or some sort of guide to what needs to be checked?
Any help is appreciated.
Nothing has been done yet.
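For reference, the statistics tasks mentioned in the question are usually driven through the DBMS_STATS package. A minimal sketch follows; the schema name is an assumption, and the monthly/weekly schedule would normally be wrapped in a DBMS_SCHEDULER job:

    -- #1 fixed object (X$) statistics
    BEGIN
      DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    END;
    /

    -- #2 table statistics for one schema
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_OWNER',                  -- assumption
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);                        -- include index stats
    END;
    /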
Related
Please bear with me if this is not a good place to ask such a question.
Is anyone familiar with the NetSuite backend Oracle database who knows the difference between the System_notes_custom and System_notes tables? Which table should I choose if I just want to monitor record changes in NetSuite? Thanks!
Is there a way in an Oracle DB to see how many records in a table were inserted, updated, and deleted in the schema? Right now I am using USER_TAB_MODIFICATIONS, and the issue is that to get a daily count I have to gather stats on the table on a daily basis, which I want to avoid because a lot of my tables have millions of records and gathering stats takes a long time to run. Can someone point me in the right direction? I would really appreciate the help. Thanks
I have a few suggestions:
SYS.DBA_TAB_MODIFICATIONS gives you total numbers since statistics were last gathered (see the sketch below) ...
If you want daily counts, you can alternatively alter your tables to store insert/update/delete dates on each row (soft delete, maybe even soft update ...).
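A minimal sketch of the first suggestion (the schema name is an assumption; to get daily counts without gathering stats, snapshot this view once a day and diff the snapshots):

    -- flush in-memory DML monitoring info so the view is current
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    -- approximate inserts/updates/deletes since statistics were last gathered
    SELECT table_owner, table_name, inserts, updates, deletes, timestamp
    FROM   sys.dba_tab_modifications
    WHERE  table_owner = 'APP_OWNER';        -- assumption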
But I think you could also use a tool to monitor it. Mine is the Real Time Oracle Monitoring and Performance Analytics Tool.
You can give it a chance.
We have a requirement to audit change history information, which includes capturing the old and new values for update operations and the old value for delete operations. I have implemented triggers on a table, but as the number of tables increases I feel the Oracle trigger option is not advisable.
Could anyone suggest a better option for auditing change history?
Oracle already provides several technologies, some licensed separately and some not, that allow you to store, view and manage historical data.
Starting from Oracle 9i, flashback query technology (extended in 10g with flashback version query) can be used to get previous versions of a row - what the data looked like before it was updated or deleted.
Oracle Workspace Manager allows you to version-enable tables to keep different versions of a row.
Starting from Oracle 11g, Total Recall technology (licensed separately) can be used to conveniently store, manage and view historical data.
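As an illustration of the flashback approach, a minimal sketch of a version query (the VERSIONS BETWEEN syntax is 10g+; table and column names are assumptions):

    -- row history for one employee over the last hour, including the
    -- operation (I/U/D) that produced each version
    SELECT versions_starttime,
           versions_operation,
           sal
    FROM   emp
           VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
                                AND SYSTIMESTAMP
    WHERE  empno = 7369;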
Which is more optimal in Oracle 11g: dropping and recreating indexes, or just running dbms_stats.gather?
Please advise.
Thanks,
Ram
I think two of the suggestions helped me get a better understanding of what needs to be done.
Why rebuild indexes? If you're doing this on a schedule, maybe consider this first:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6601312252730
- tbone
Dropping and recreating indexes won't do that (and might make things worse, see tbone's link); unless you're talking about dropping, loading and then recreating. Gathering statistics will only help if the problem is that the existing statistics are stale or inaccurate and the explain plan(s) indicate it's not working as expected. You need to explain what you're doing and what problem you're seeing - Alex Poole
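For the statistics side of the question, a minimal sketch of gathering stats on a single table (schema and table names are assumptions):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP_OWNER',                  -- assumption
        tabname          => 'ORDERS',                     -- assumption
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);   -- gather index statistics too, usually
                                     -- preferable to rebuilding the indexes
    END;
    /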
I need some help with auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any field in any table. So the things we want to capture in this audit are:
user who made the modification
time the change occurred
old value and new value
So we started creating a trigger that was supposed to perform the audit for any table, but then ran into issues...
As I mentioned before, we have so many tables that we cannot create a trigger for each one. So the idea is to create a master trigger that behaves dynamically for any table that fires it. I was trying to do that but had no luck at all... it seems that Oracle restricts the trigger environment to a single table declared in the code, rather than determined dynamically as we want.
Do you have any idea how to do this, or any other advice for solving this issue?
If you have 10g Enterprise Edition you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
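A minimal sketch of what enabling FGA on one table looks like (schema, table and policy names are assumptions):

    BEGIN
      DBMS_FGA.ADD_POLICY(
        object_schema   => 'APP_OWNER',                -- assumption
        object_name     => 'ORDERS',                   -- assumption
        policy_name     => 'ORDERS_DML_AUDIT',         -- assumption
        statement_types => 'INSERT,UPDATE,DELETE',
        audit_trail     => DBMS_FGA.DB_EXTENDED);      -- capture SQL text + binds
    END;
    /

    -- captured activity can then be read back from the FGA audit trail
    SELECT timestamp, db_user, object_name, sql_text
    FROM   dba_fga_audit_trail;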
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means including the CHANGED_TIMESTAMP and CHANGED_USER on the application table, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
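To make that concrete, a minimal sketch of the kind of trigger the generator would emit for one table (table and column names are assumptions):

    -- Hypothetical: EMP with primary key EMPNO, shadowed by EMP_AUDIT
    -- with an identical structure including CHANGED_TIMESTAMP/CHANGED_USER
    CREATE OR REPLACE TRIGGER emp_audit_trg
        AFTER INSERT OR UPDATE OR DELETE ON emp
        FOR EACH ROW
    BEGIN
        IF DELETING THEN
            -- for deletes, only the primary key is populated
            INSERT INTO emp_audit (empno, changed_timestamp, changed_user)
            VALUES (:OLD.empno, SYSTIMESTAMP, USER);
        ELSE
            -- inserts and updates record the whole :NEW row
            INSERT INTO emp_audit (empno, ename, sal,
                                   changed_timestamp, changed_user)
            VALUES (:NEW.empno, :NEW.ename, :NEW.sal, SYSTIMESTAMP, USER);
        END IF;
    END;
    /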
You don't need to write your own triggers.
Oracle ships with flexible, fine-grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
You can audit so much that it can be like drinking from the firehose - that can hurt server performance at some point, leave you with so much audit information that you won't be able to extract meaningful information from it quickly, and/or eat up lots of disk space. Spend some time thinking about how much audit information you really need, and how long you might need to keep it around. That might mean starting with a basic configuration and then tailoring it down once you can sample the volume of audit trail data you're actually collecting, as in the sketch below.
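A minimal sketch of scoping it that way with standard auditing on a single table (assumes the AUDIT_TRAIL initialization parameter is set to DB; schema and table names are assumptions):

    -- audit every DML statement against one table, one record per statement
    AUDIT INSERT, UPDATE, DELETE ON app_owner.orders BY ACCESS;

    -- sample what is actually being collected before widening the net
    SELECT username, timestamp, action_name, obj_name
    FROM   dba_audit_trail
    WHERE  obj_name = 'ORDERS'
    ORDER  BY timestamp DESC;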