My customer wants to create a new table every day, e.g. MyTable<yyyymmdd>, and my application will then read that table and perform some cross-checks. Once all the cross-checks are done, I need to drop the table. So a table will be created and dropped every day.
Is this a good solution? Could the database crash because of this?
Please advise.
I have already offered another solution, in which the customer simply inserts the data into a single table and I delete all the records once the checks are done. But the customer refused it.
This is a simple data life-cycle issue: use interval partitioning. It is easy to manage and good for performance.
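As a minimal sketch of the idea (the table, column, and partition names here are made up for illustration):

    CREATE TABLE my_table (
        load_date  DATE NOT NULL,
        payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (load_date)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01'));

    -- After the cross-checks, discard a day's data with one
    -- dictionary operation instead of dropping a whole table:
    ALTER TABLE my_table DROP PARTITION FOR (DATE '2024-06-01');

Oracle creates a new partition automatically for each new day's inserts, so nothing has to be created by hand.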
There is no point in creating and dropping a table with the same columns every day. If the columns in that table change every day, you can go ahead with the approach.
Your database will not crash because of this. Just make sure no services are accessing the table at the moment it is dropped.
I have a use case where we need to store some data for a specific period (let's say 10k rows for 5 minutes), and since the data is large it can't be held in Java memory. Please help me understand which of the following is the better approach, and why:
1. Create a permanent table and introduce a column that lets me fetch and drop the rows belonging to each session.
2. Create a temporary table per session and drop it just after processing.
Thanks in advance.
It sounds like you just need to make use of GTTs (global temporary tables); please check the documentation linked below. You don't need to worry about truncating or dropping the table, as the data is kept only for as long as the session lasts.
Documentation
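As a minimal sketch (the table and column names are made up), a session-scoped GTT looks like this:

    CREATE GLOBAL TEMPORARY TABLE session_work (
        id      NUMBER,
        payload VARCHAR2(4000)
    ) ON COMMIT PRESERVE ROWS;  -- each session sees only its own rows,
                                -- and they vanish when the session ends

With ON COMMIT DELETE ROWS instead, the rows would disappear at every commit, which is usually too aggressive for a multi-step process.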
We are working on a new release of our product, and we want to implement a feature where a user can view the older data behind a disease prediction made by the application. For example, the user would have the option to go back in time and see the predictions made one year ago. At the database level, what needs to happen is to fetch archived data. The database has around 200 tables, and only a subset of them needs to be taken back to an older state.
I read about Flashback, and although it seems to be used more for recovery, I was curious to know whether it can be used here.
1> Would it be possible to use Flashback?
2> If yes, how would it affect performance?
3> If no, what could be some other options?
Thank you
Flashback could be a way, but you need to use Flashback Data Archive for the tables you want. Using this technology, you can choose how far back in time you want to be able to go. What I find interesting about Flashback is that you query the same table (with some additional options), instead of taking the other route of creating a history table.
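A minimal sketch of the idea (the archive, tablespace, and table names are assumptions, not your schema):

    -- Create an archive that keeps row history for two years:
    CREATE FLASHBACK ARCHIVE predictions_fda
        TABLESPACE users
        RETENTION 2 YEAR;

    -- Enable history tracking on the table:
    ALTER TABLE predictions FLASHBACK ARCHIVE predictions_fda;

    -- Query the table as it looked one year ago:
    SELECT *
    FROM   predictions
           AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR);

Note that history only accumulates from the moment the archive is enabled, so you cannot query back past that point.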
The application I'm working on has a legacy problem: two tables, ADULT and CHILD, were created in an Oracle 11g database.
This has led to a number of related tables that have a field for both ADULT and CHILD, with no foreign keys applied.
Bugs have arisen where poor development mapped relationships to the wrong field.
Our technical architect plans to merge the ADULT and CHILD tables into a new ADULT_CHILD table and create materialized views in place of the original tables. The plan is also to create a new id value and replace the id values in all associated tables, so that even if the PL/SQL/APEX code maps to the wrong field, the data mapping will still be correct.
The reasoning behind this solution is that it does not require changing any other code.
My opinion is that this is a fudge, but my background is more Java/.NET OO.
What arguments can I use to convince the architect that this is wrong and not a real solution? I'm concerned we are creating a more complex solution, and that performance will be an issue.
Thanks for any pointers.
While it may be a needed solution, it might also create new issues. If you really do need an MV that is up to date at all times, you need ON COMMIT refresh, and that in turn tends to make all updates sequential: every process writing to the table waits in line for the one currently updating it to commit. The queue forms across the whole MV refresh, not per table and not per row.
So it is prudent to test the approach with realistic loads. Why does it have to become a single table? Could the tables not stay separate, with a foreign key added? If you need more control over the updates, rename the tables and put views with INSTEAD OF triggers in their place.
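A minimal sketch of the rename-and-view idea (the column names are made up):

    ALTER TABLE adult RENAME TO adult_base;

    -- A view takes the old table's name, so existing code keeps working:
    CREATE OR REPLACE VIEW adult AS
        SELECT id, name FROM adult_base;

    -- An INSTEAD OF trigger lets you intercept and control the writes:
    CREATE OR REPLACE TRIGGER adult_ioi
    INSTEAD OF INSERT ON adult
    FOR EACH ROW
    BEGIN
        INSERT INTO adult_base (id, name)
        VALUES (:NEW.id, :NEW.name);
    END;
    /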
I need some help with auditing in Oracle. We have a database with many tables, and we want to be able to audit every change made to any field of any table. The things we want in this audit are:
the user who made the change
the time the change occurred
the old value and the new value
So we started creating the trigger that was supposed to perform the audit for any table, but then ran into issues...
As I mentioned, we have so many tables that we cannot create a trigger for each one. So the idea was to create a master trigger that behaves dynamically for whichever table fires it. I tried, but with no luck at all... it seems Oracle restricts a trigger to the single table declared in its code, rather than allowing it to be attached dynamically as we want.
Do you have any ideas on how to do this, or any other advice for solving this issue?
If you have 10g Enterprise Edition you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means adding CHANGED_TIMESTAMP and CHANGED_USER columns to the application table as well, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
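A minimal sketch of the pattern for one hypothetical table EMP (the column names are assumptions):

    -- Shadow table with the same structure plus the two audit columns:
    CREATE TABLE emp_audit AS
        SELECT e.*,
               CAST(NULL AS TIMESTAMP)    AS changed_timestamp,
               CAST(NULL AS VARCHAR2(30)) AS changed_user
        FROM   emp e
        WHERE  1 = 0;  -- copy the structure only, no rows

    -- Insert a full :NEW snapshot on every insert or update:
    CREATE OR REPLACE TRIGGER emp_audit_trg
    AFTER INSERT OR UPDATE ON emp
    FOR EACH ROW
    BEGIN
        INSERT INTO emp_audit (empno, ename, sal, changed_timestamp, changed_user)
        VALUES (:NEW.empno, :NEW.ename, :NEW.sal, SYSTIMESTAMP, USER);
    END;
    /

Generating this pair of statements for every table from USER_TAB_COLUMNS is a straightforward PL/SQL loop.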
You don't need to write your own triggers.
Oracle ships with flexible and fine-grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
You can audit so much that it is like drinking from a firehose. At some point that can hurt server performance, leave you with so much audit information that you cannot extract anything meaningful from it quickly, and/or eat up lots of disk space. Spend some time thinking about how much audit information you really need and how long you need to keep it around. It may be best to start with a basic configuration and then tailor it down once you have a sample of the volume of audit trail data you are actually collecting.
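As a starting point, the built-in audit trail can be as simple as this sketch (the table name is an assumption, and the AUDIT_TRAIL initialization parameter must be set to DB):

    -- Record who touched the table, and when, in the database audit trail:
    AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

    -- Review it later:
    SELECT username, action_name, timestamp
    FROM   dba_audit_trail
    WHERE  obj_name = 'EMP';

Note that the standard audit trail records who did what and when, but not the old and new column values; for those you need FGA or a trigger-based approach.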
Can someone enlighten me as to why Oracle does not support an autoincrement feature for primary keys?
I know the same effect can be achieved with a sequence and a trigger, but why didn't Oracle introduce an AUTOINCREMENT keyword that would internally create the sequence and trigger? I bet the people at Oracle have definitely thought about this, so there must be some reason for not providing the feature. Any thoughts?
It may just be terminology.
'AUTOINCREMENT' implies that record '103' will be created between records '102' and '104'. In clustered environments, that isn't necessarily the case for sequences: one node may insert '100', '101', '102' while the other node is inserting '110', '111', '112', so the records are 'out of order'. [Of course, the term 'sequence' has the same implication.]
If you choose not to follow the sequence model, then you introduce locking and serialization issues. Do you force an insert to wait for the commit/rollback of another insert before determining the next value, or do you accept that, if a transaction rolls back, you get gaps in the keys?
Then there's the issue of what to do if someone wants to insert a row with a specific value for that field (is it allowed, or does it work like a DEFAULT?), or if someone tries to update it. If someone inserts '101', does the autoincrement 'jump' to '102', or do you risk attempted duplicate values?
It can also have implications for the IMP utilities, direct-path writes, and backwards compatibility.
I'm not saying it couldn't be done. But I suspect in the end someone has looked at it and decided that they can spend the development time better elsewhere.
Edit to add:
In Oracle 12.1, support for an IDENTITY column was added.
"The identity column will be assigned an increasing or decreasing integer value from a sequence generator for each subsequent INSERT statement. You can use the identity_options clause to configure the sequence generator."
https://docs.oracle.com/database/121/SQLRF/statements_7002.htm#CJAHJHJC
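A minimal sketch of the 12.1+ syntax (the table and column names are made up):

    CREATE TABLE t (
        id  NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
        val VARCHAR2(100)
    );

    -- The id column is populated from an internal sequence:
    INSERT INTO t (val) VALUES ('first row');

GENERATED ALWAYS AS IDENTITY would instead reject user-supplied values for the column.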
This has been a bone of contention for quite some time between the various DB camps. For a database system as polished and well-built as Oracle, it still stuns me that it requires so much code and effort to enable this commonly used and valuable feature.
I recommend putting some kind of incremental-primary-key builder/function/tool in your toolkit and keeping it handy for Oracle work. And write your congressman and tell him how badly they need to make this feature available from the GUI, or with a single line of SQL!
Because it has sequences, which can do everything autoincrement does, and then some.
Many have complained about this, but the answer generally is that you can create one easily enough with a sequence and a trigger.
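The classic pre-12c pattern looks like this (the table, column, and sequence names are assumptions):

    CREATE SEQUENCE t_seq;

    CREATE OR REPLACE TRIGGER t_bir
    BEFORE INSERT ON t
    FOR EACH ROW
    WHEN (NEW.id IS NULL)   -- only fill in the key when none was supplied
    BEGIN
        :NEW.id := t_seq.NEXTVAL;
    END;
    /

(On versions before 11g you would need SELECT t_seq.NEXTVAL INTO :NEW.id FROM dual; instead of the direct assignment.)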
Sequences can easily get out of sync (someone inserts a record manually into the database without going through the sequence). Oracle should have implemented this ages ago!
Sequences are easy to use, but not as easy as autoincrement: they require an extra bit of coding.