I work on a team where many of us connect to the same Oracle database, and anyone is permitted to make changes to it. I want to track those changes, specifically to table structure, indexes, primary/foreign keys, and so on. I want to refer to this history to check what has been changed recently, by which user, and on what date. Is there a script or any other way to do this? Awaiting your suggestions,
Thanks.
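One approach worth sketching (not guaranteed to fit your environment, and all object names here are illustrative) is a schema-level DDL trigger that records each structural change into a log table:

```sql
-- Hypothetical audit log for DDL changes; all names are illustrative.
CREATE TABLE ddl_audit_log (
  changed_by   VARCHAR2(30),
  changed_at   TIMESTAMP,
  operation    VARCHAR2(30),   -- e.g. CREATE, ALTER, DROP
  object_type  VARCHAR2(30),   -- e.g. TABLE, INDEX
  object_name  VARCHAR2(128)
);

-- Fires after any DDL statement in this schema and records who did what, when.
CREATE OR REPLACE TRIGGER ddl_audit_trg
  AFTER DDL ON SCHEMA
BEGIN
  INSERT INTO ddl_audit_log
  VALUES (USER, SYSTIMESTAMP, ORA_SYSEVENT,
          ORA_DICT_OBJ_TYPE, ORA_DICT_OBJ_NAME);
END;
/
```

Querying ddl_audit_log ordered by changed_at then gives the modification history per user and date.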
At work, my team accesses and works in a number of different databases using our team login. We have a ton of tables and views in each respective schema and I would guess that only ~10% are used regularly. As such, I would like to clean up these schemas to keep only those tables and views which are actually used and delete all the other ones (or at least archive them).
Is there any way for me to see the last time that a view was run, or the last time that a table was queried? My thinking is that if I can see that a view/table hasn't been used in x amount of time, then I'd feel more comfortable dropping it. My fear is that without such a process, I might drop tables/views that are used in Tableau dashboards and for other purposes.
The DBA_HIST tables can show you activity only as far back as the workload repository retains data, not beyond that, and even then it won't be conclusive.
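As a sketch of what that lookup could be, you can search the AWR history for SQL that touched a given table (this assumes the Diagnostics Pack is licensed, and MY_TABLE is a placeholder):

```sql
-- Find the most recent AWR snapshot containing SQL that mentions MY_TABLE.
-- Only covers the AWR retention window, so absence here is not proof of disuse.
SELECT t.sql_id,
       DBMS_LOB.SUBSTR(t.sql_text, 200, 1) AS sql_text_fragment,
       MAX(s.snap_id)                      AS last_snapshot
  FROM dba_hist_sqltext t
  JOIN dba_hist_sqlstat s ON s.sql_id = t.sql_id
 WHERE UPPER(t.sql_text) LIKE '%MY_TABLE%'
 GROUP BY t.sql_id, DBMS_LOB.SUBSTR(t.sql_text, 200, 1);
```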
I'm looking at doing a data-fix and need to be able to prove that the data I have intended to change is the only data changed. (For example - that a trigger hasn't modified additional columns that I wasn't expecting)
I've been looking at Oracle's Flashback Query process, which would be great, except that this is not enabled on the database in question.
Since this check would be carried out prior to committing the transaction, Oracle must have the "before" information squirreled away somewhere, and I wondered if there is any way of accessing this undo information?
Otherwise, I would potentially have to make a temporary copy of each table and do a compare between the live table and the backup, which may also result in inconsistencies between the backup query time and the update transaction time.
While I'm expecting the answer "no", I'm hoping someone can point me in a better direction than that which I appear to be headed at present.
Thanks!
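For what it's worth, the copy-and-compare fallback described above can be sketched like this (EMP is a placeholder table name); run before committing, the second query sees your own uncommitted changes, which sidesteps part of the timing concern:

```sql
-- Snapshot the table immediately before the data fix.
CREATE TABLE emp_before_fix AS SELECT * FROM emp;

-- ... run the data fix here, before committing ...

-- Rows that differ in either direction between the snapshot and the live table.
SELECT 'removed_or_changed' AS side, x.*
  FROM (SELECT * FROM emp_before_fix MINUS SELECT * FROM emp) x
UNION ALL
SELECT 'added_or_changed' AS side, y.*
  FROM (SELECT * FROM emp MINUS SELECT * FROM emp_before_fix) y;
```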
We are working on a new release of our product and we want to implement a feature where a user can view older data for a disease prediction made by the application. For example, the user would have an option to go back in time to see the predictions made one year ago. At the database level, what needs to happen is to fetch archived data. The number of tables in the database is around 200, and only a subset of them needs to go back to an older state.
I read about Flashbacks and although they seem to be used more for recovery, was curious to know if they can be used.
1. Would it be possible to use Flashback?
2. If yes, how would it affect performance?
3. If no, what could be some other options?
Thank you
Flashback could be a way, but you need to use Flashback Data Archive for the tables you want. Using this technology, you can choose how much history you want to be able to store. What I find interesting about flashback technology is that you query the same table (with some additional options), instead of the other option of creating a history table.
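A minimal sketch of that setup, assuming a tablespace FDA_TS and a table PREDICTIONS (both names illustrative):

```sql
-- Create an archive that keeps one year of history (11g+ syntax).
CREATE FLASHBACK ARCHIVE predictions_fba
  TABLESPACE fda_ts
  RETENTION 1 YEAR;

-- Enable history tracking for the table.
ALTER TABLE predictions FLASHBACK ARCHIVE predictions_fba;

-- Later: query the same table as it was one year ago.
SELECT *
  FROM predictions
       AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR)
 WHERE patient_id = :patient_id;
```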
I'm trying to make a database table for every single username. I see that for every username I can add more columns to its row, but I want to give each username a full table of its own. How can I do that?
Thanks,
Eli
First let me say: what you are trying to do sounds like really, really bad database design, and you should rethink your idea of creating a table per user. To get help with this, you should add far more detail about your reasoning to the question. As far as I know, there is also a maximum number of classes you can create on Parse, so sooner or later you will run into problems, either performance-wise or due to technical limitations of the platform.
That being said, you can use the Schema API to programmatically create/delete/update tables of your Parse app. It always requires the master key, so doing this from the client side is not recommended for security reasons. You could put this into a Cloud Code function for example and call this one from your app/admin tool to create a new table for a user on the fly or delete a table of a user.
Again, better not to do it; think about a better way to design your database. Discussing that, though, would be out of scope here.
I need some help in auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any table in any field. So the things we want to have in this audit are:
user who modified
time of change occurred
old value and new value
so we started creating the trigger which was supposed to perform the audit for any table but then had issues...
As I mentioned before, we have many tables and cannot create a trigger for each one. So the idea is to create a master trigger that behaves dynamically for any table that fires it. I was trying to do this, but with no luck at all; it seems that Oracle restricts a trigger to the single table declared in its code, rather than letting it attach dynamically the way we want.
Do you have any idea on how to do this or any other advice for solving this issue?
If you have 10g enterprise edition you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means including the CHANGED_TIMESTAMP and CHANGED_USER on the application table, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
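For one application table, a sketch might look like this (EMP and its columns are placeholders; your generated version would list each table's real columns):

```sql
-- Audit table: same columns as EMP plus the change metadata.
CREATE TABLE emp_audit AS
  SELECT e.*, SYSTIMESTAMP AS changed_timestamp, USER AS changed_user
    FROM emp e
   WHERE 1 = 0;   -- copy structure only, no rows

CREATE OR REPLACE TRIGGER emp_audit_trg
  AFTER INSERT OR UPDATE OR DELETE ON emp
  FOR EACH ROW
BEGIN
  IF DELETING THEN
    -- For deletes, record only the primary key plus the metadata.
    INSERT INTO emp_audit (empno, changed_timestamp, changed_user)
    VALUES (:OLD.empno, SYSTIMESTAMP, USER);
  ELSE
    -- For inserts and updates, record the whole :NEW row.
    INSERT INTO emp_audit (empno, ename, sal, changed_timestamp, changed_user)
    VALUES (:NEW.empno, :NEW.ename, :NEW.sal, SYSTIMESTAMP, USER);
  END IF;
END;
/
```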
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
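Generating the audit tables, for instance, can be as simple as a query over user_tables whose output you spool and run (the _AUDIT suffix and metadata columns here are illustrative); the triggers can be generated the same way by walking user_tab_columns:

```sql
-- Emit one CREATE TABLE statement per application table.
SELECT 'CREATE TABLE ' || table_name || '_AUDIT AS '
    || 'SELECT t.*, SYSTIMESTAMP AS changed_timestamp, '
    || 'USER AS changed_user FROM ' || table_name
    || ' t WHERE 1 = 0;' AS ddl
  FROM user_tables
 ORDER BY table_name;
```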
You don't need to write your own triggers.
Oracle ships with flexible and fine grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
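As a flavor of what those services give you, standard auditing can be enabled per table and then queried from the dictionary (this assumes the AUDIT_TRAIL initialization parameter is set to DB; SCOTT.EMP is a placeholder):

```sql
-- Record every insert, update and delete against the table, per statement.
AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

-- Review what was captured.
SELECT username, action_name, obj_name, timestamp
  FROM dba_audit_trail
 ORDER BY timestamp DESC;
```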
You can audit so much that it can be like drinking from the firehose - and that can hurt the server performance at some point, or could leave you with so much audit information that you won't be able to extract meaningful information from it quickly, and/or you could end up eating up lots of disk space. Spend some time thinking about how much audit information you really need, and how long you might need to keep it around. To do so might require starting with a basic configuration, and then tailoring it down after you're able to get a sample of the kind of volume of audit trail data you're actually collecting.